
Step Back Writing

I’m currently listening to Small Teaching by James M. Lang, so I’ve got baseball metaphors on the brain. Lang’s organizing framework for the pedagogy that he’s advancing is “small ball,” all the baseball maneuvers that consistently lead to positive outcomes but are not flashy. Think bunting and stealing bases as opposed to home runs and grand slams. Lang’s idea is that big pedagogical impact can come from small changes, modifications that aren’t flashy but that you could implement tomorrow.

I had a very short and mediocre career as a little league baseball player. If memory serves, I got hit with the ball once and it was all over. I was afraid of pitches forever, and I quickly lost interest in playing due to a fear of bodily harm. The physical “trauma” meant that I just could not find any joy—any play—in the sport. My other main little league memory is a particular exercise that we used to do for throwing that I’ve seen online described as “step back throwing.”

The idea behind step back throwing is pretty straightforward. Two people start fairly close together. One person throws the ball to the other. If it’s a successful catch, you take a step back to increase the distance. You repeat this process such that you gradually move farther and farther apart. If you ever drop the ball, you pause or take a step forward to close the distance. The process develops your ability to throw at longer distances. Once you reach the upper limit of your ability, you’ll hover around exactly the space that you need to work on. Lots of meaningful practice just where you need it.

I want to put this baseball pedagogy conversation in dialogue with Miriam Posner’s reflections on teaching writing in the age of AI over on Bluesky. She writes (I had to disable the embed for the Scholars’ Lab site for reasons, so I’m quoting here):

  One way of thinking about it is, why wouldn’t *I* use ChatGPT to write a paper?

  1. It’s a matter of self-respect.
  2. I believe my writing says something basic about who I am.
  3. I believe research and writing are valuable activities.
  4. I don’t want to contribute to a harmful industry.
  5. I can write better than ChatGPT.

  So, in some ways, our question should be: how do we get students to a point where these things are true for them, too?

I love Posner’s list, which does a great job of laying out reasons we might give students for caring about writing. I want to add one other point: writing can be fun. For so many people writing feels painful, but it need not be that way. Would it help articulate the value of writing if our pedagogies reintroduced joy? So often writing feels like a high-stakes chore for students, but how can we reintroduce play into the process?

I’m interested in the kinds of exercises, writing or otherwise, that can reintroduce ludic constraints to the work. Here’s one idea, based on the baseball metaphor I can’t stop thinking about. I’m calling it “step back writing.”

Take a particular course topic, book, or article, and write a three-word sentence about it. Then repeat the process iteratively, adding a word each time: you start out with three words, then four, then five, and so on. You might begin with different versions of the same sentence, but the sentence will inevitably grow, develop in new ways, and become something else entirely. Pick a certain point at which you stop lengthening (in this example I arbitrarily stopped at twenty words). You could stop there, but try instead to iterate backwards, shaving off one word at a time. Be careful not to just copy and paste the same sentences in reverse; the goal is to wind up with a different three-word phrase at the end.

Here’s an example, where I start out with a three-word phrase, iterate up one word at a time, then go back down:

  • Writing is joy.
  • Writing can be fun.
  • Surprisingly, writing can be fun.
  • Make writing fun for your students.
  • Can you try to make writing fun?
  • Why would you try to make writing fun?
  • Writing does not have to be like pulling teeth.
  • When was the last time you hated your own writing?
  • Who was it that made you find love in your writing?
  • For me, the most important part of writing has always been motivation.
  • Motivation is the process of rewarding effort with something that you care about.
  • Unfortunately, part of the challenge is that everyone will get motivation from different things.
  • I always paid the most attention to the teachers who brought joy into the classroom.
  • Some might view a pedagogy of joy as unserious, but joy can come from many things.
  • I am not suggesting that you bring a persona into the classroom that feels inauthentic to you.
  • It could be argued that writing is serious business, but why not help students find other ways in?
  • What do we need to know about students’ lives to make them care about the work that we do?
  • Of course, you have to be true to your own teaching persona, and this might not make sense for you.
  • I think that it could be worth asking students if working with AI to write sparks joy for them.
  • If writing doesn’t bring a sense of pleasure to students, what might that say about the writing instruction?
  • Is writing something we teach our students at all, or is it just something that happens offstage?
  • Can we blame students for looking for writing instruction elsewhere if it isn’t in the classroom?
  • What is AI teaching our students about the written word and why is that attractive?
  • How can we show students a kind of writing that heals past writing traumas?
  • Most students probably find writing to be just a hurdle to jump through.
  • Why do some avoid hurdles while others go on to become hurdlers?
  • ChatGPT offers fast-food writing for our students—easily generated and easily consumed.
  • How can students slow down and sit with their writing?
  • What is the first introduction to writing for students?
  • Was it something that made their hearts sing?
  • How do we make them care again?
  • What does it mean to play?
  • What can make writing playful?
  • Why do we play?
  • What motivates students?

The exercise was something of a pain to go through at times, but it started to feel like poetry by the end. And while you could certainly dump this kind of exercise into a ChatGPT prompt, that’s not quite a concern here. My goal is explicitly not to develop writing exercises that are somehow AI-proofed, that students can’t execute with a tool. Instead, I want to think further about why we write, how we talk about it, and how we instill different kinds of relationships to it with the exercises we offer students. Afterwards, we might ask our students to vote on who wrote the most moving three-word sentence, or for the clearest sentence of greatest length. We can make a game of it. Joy and play certainly aren’t the only reasons we write, and they won’t be the primary frames for many instructors. But perhaps creative approaches to writing instruction can help students to re-evaluate their own relationship to the written word.
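If it helps to scaffold the exercise digitally (say, as a quick self-check before the voting game), here is a minimal Python sketch of the word-count pattern. It is purely illustrative: the check_step_back helper, the three-word start, and the twenty-word peak are assumptions drawn from the example above, not part of the exercise itself.

```python
# Illustrative scaffold for the "step back writing" exercise described above.
# It checks that a draft's word counts climb by one from a starting length to a
# peak and then descend by one, and that the closing sentence is not just the
# opening sentence repeated. Helper names here are my own, not part of the exercise.

def word_count(sentence: str) -> int:
    """Count whitespace-separated words in a sentence."""
    return len(sentence.split())


def check_step_back(sentences: list[str], start: int = 3, peak: int = 20) -> list[str]:
    """Return notes about where a draft departs from the step-back pattern."""
    expected = list(range(start, peak + 1)) + list(range(peak - 1, start - 1, -1))
    notes = []
    if len(sentences) != len(expected):
        notes.append(f"Expected {len(expected)} sentences for a peak of {peak}; found {len(sentences)}.")
    for i, (sentence, target) in enumerate(zip(sentences, expected), start=1):
        actual = word_count(sentence)
        if actual != target:
            notes.append(f"Sentence {i} has {actual} words; aim for {target}.")
    if sentences and sentences[0].strip().lower() == sentences[-1].strip().lower():
        notes.append("The closing sentence repeats the opening one; try ending somewhere new.")
    return notes


if __name__ == "__main__":
    draft = ["Writing is joy.", "Writing can be fun.", "Surprisingly, writing can be fun."]
    for note in check_step_back(draft, start=3, peak=5):
        print(note)
```

Run against a draft, the sketch only flags sentences whose lengths fall off the climb-then-descend pattern; everything about meaning and play stays with the writer.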


Reframing AI with the Digital Humanities

A version of this piece will be an open-access chapter in a volume by invited speakers at the 10/23/2024 “Reimagining AI for Environmental Justice and Creativity” conference, co-organized by Jess Reia, MC Forelle, and Yingchong Wang and co-sponsored by UVA’s Digital Technology for Democracy Lab, Environmental Institute, and School of Data Science. I had more to say, but this was what I managed inside the word limit!

I direct the Scholars’ Lab, a digital humanities (DH) center that has led and collaborated on ethical, creative experimentation at the intersections of humanities, culture, and tech at the University of Virginia since 2006. A common definition of DH encompasses both using digital methods (such as coding and mapping) to explore humanities research questions (such as concerns of history, culture, and art), and asking humanities-fueled questions about technology (such as ethical design review of tools like specific instances of AI). I always add a third core feature of DH: a set of socially just values and community practices around labor, credit, design, collaboration, inclusion, and scholarly communication, inseparable from best-practice DH.

I write this piece as someone with expertise in applicable DH subareas—research programming, digital scholarly design, and the ethical review of digital tools and interfaces—but not as someone with particular experience related to ML, LLMs, or other “AI” knowledges at the levels that matter (e.g. reviewing code, reading CS journals). A field of new and rapidly evolving tools means true expertise in the capabilities and design of AI is rare; often we are either talking about secondhand experiences of these tools (e.g. “Microsoft Co-Pilot let me xyz”) or about AI as a shorthand for desired computing capabilities, ungrounded in familiarity with current research papers or understanding of codebases. (A values-neutral claim: science fiction authors without technical skillsets have helped us imagine what we later create.)

Convergence on the term “data science” has both inspired new kinds of work and elided contributions of the significantly overlapping field of library and information studies. Similarly, “AI” as the shorthand for the last few years’ significant steps forward in ML (and LLMs in particular) obscures the work of the digital humanities and related critical digital research and design fields such as Science and Technology Studies (STS). When we use the term “AI”, it’s tempting to frame our conversations around a Wholly New Thing, focusing on longer-term technical aspirations uninhibited by practical considerations of direct audience needs, community impacts, and resources. While that’s not necessarily a bad way to fuel technological creativity, it’s too often the only way popular conversations around AI proceed. In one research blog post exploring the moral and emotional dimensions of technological design, L.M. Sacasas lists 41 questions we can ask when designing technologies, from “What sort of person will the use of this technology make of me?” to “Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?” We don’t need to reinvent digital design ethics for AI—we’ve already got the approaches we need (though those can always be improved).

When we frame “AI” as code, as a set of work distinct from but continuous with a long history of programming and its packagings (codebase, repo, library, plugin…), it’s easier to remember we have years of experience designing and analyzing the ethics and societal impacts of code—so much so that I’ve started assuming that people who say “LLM” or “ML” rather than “AI” when starting conversations are more likely to be conversant with the specifics of current AI tech at the code and CS-journal-reading levels, as well as with its ethical implications. The terms we use for our work and scholarly conversations are strategic, matching the language of current funding opportunities and job ads. We’ve seen similar technologically vague popularizing of terms during past convergences of tech interest too, including MOOCs, “big data”, and the move from “humanities computing” to the more mainstreamed “digital humanities”.

Digital humanities centers like our Scholars’ Lab offer decades of careful, critical work evaluating existing tools, contributing to open-source libraries, and coding and designing technology in-house—all founded on humanities skills related to history, ethics, narrative, and other strengths necessary to the generative critique and design of beneficial tech. Some of the more interesting LLM-fueled DH work I’ve seen in the past couple of years has involved an AI first or second pass at a task, followed by verification by humans—for situations where the verification step is neither more onerous nor more error-prone than a human-only workflow. For example (a minimal code sketch of this pattern appears below):

  • The Marshall Project had humans pull out interesting text from policies banning books in state prisons, used AI to generate useful summaries of these, then had humans check those summaries for accuracy
  • Scholars Ryan Cordell and Sarah Bull tested ChatGPT’s utility in classifying genres of historical newspaper and literary text from dirty OCR and without training data, and in OCR cleanup, with promising results
  • My Scholars’ Lab colleague Shane Lin has been exploring AI applications for OCRing text not well-supported by current tools, such as writing in right-to-left scripts
  • Archaeologists restoring the HMS Victory applied an AI-based algorithm to match very high-resolution, highly detailed images stored in different locations to areas of a 3D model of the ship

Alongside any exploration of potential good outcomes, we also need to attend to whether potential gains in our understanding of the cultural record, or in how we communicate injustice and build more just futures, are worth the intertwined human and climate costs of this or other tech.
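To make that first-pass-then-verify pattern concrete, here is a minimal Python sketch. It is not the code behind any of the projects above: the ai_first_pass stub, the Record fields, and the console-based review loop are placeholders I have invented for illustration; a real project would swap in its own model call and review interface.

```python
# Minimal sketch of an "AI first pass, human verification" workflow: a model
# drafts a summary for each record, then a person approves or corrects every
# draft before it counts as done. The ai_first_pass stub is a stand-in for
# whatever model or service a project actually uses.

from dataclasses import dataclass


@dataclass
class Record:
    source_text: str            # e.g. an excerpt a person pulled from a policy document
    draft_summary: str = ""     # filled in by the model's first pass
    approved_summary: str = ""  # filled in, or corrected, by a human reviewer


def ai_first_pass(text: str) -> str:
    """Placeholder for a model call; a real project would swap in its own summarizer."""
    return text[:80] + ("..." if len(text) > 80 else "")


def human_verification(records: list[Record]) -> list[Record]:
    """Show each draft to a reviewer, who accepts it as-is or types a correction."""
    for record in records:
        print(f"\nSOURCE: {record.source_text}\nDRAFT:  {record.draft_summary}")
        response = input("Press Enter to accept, or type a corrected summary: ").strip()
        record.approved_summary = response or record.draft_summary
    return records


if __name__ == "__main__":
    excerpts = ["Example policy excerpt one.", "Example policy excerpt two."]
    records = [Record(source_text=text) for text in excerpts]
    for record in records:
        record.draft_summary = ai_first_pass(record.source_text)  # AI drafts first
    human_verification(records)  # nothing is approved without a human look
```

The design choice that matters here is that nothing becomes an approved summary until a person has looked at it, which is what keeps the verification step from quietly dropping out of the workflow.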

One of DH’s strengths has been its focus on shared methods and tools across disciplines, regardless of differences in content and disciplinary priorities, with practitioners regularly attending interdisciplinary conferences (especially unusual within the humanities) and discussing overlapping applications of tools across research fields. DH experts also prioritize conversations that are not content-agnostic, prompted by the frequency with which we borrow and build on tools created for non-academic uses. For example, past Scholars’ Lab DH Fellow Ethan Reed found utility in adapting a sentiment analysis tool from outside his field to explore the emotions in Black Arts poetry, but he also spent a significant portion of his research writing critiquing the biased results, which stemmed from the different language of sentiment in the tool’s Rotten Tomatoes training dataset. (ML training sets are an easy locus for black-boxing biases, context, and creator and laborer credit—similar to known issues with text digitization work, as explored by Aliza Elkin’s troublingly gorgeous, free Hand Job zine series capturing Google Books scans that accidentally caught the often non-white, female, or non-gender-conforming hands of the hidden people doing the digitizing.)

We already know where to focus to produce more beneficial, less harmful, creative digital tools: social justice. At the Reimagining AI roundtable, my table’s consensus was that issues of power and bias are key not just to reducing ML harms, but to imagining and harnessing positive potential. Key areas of concern included climate harms (e.g. reducing the energy costs of data centers), racism (e.g. disproportionate negative impacts on BIPoC compounding existing economic, labor, and police violence threats), human rights (e.g. provision of a universal basic income easing concerns about areas where ML may beneficially offset human labor), and intertwined ableist and computing access issues (e.g. AI search-result “slop” is terrible for screen readers and low-bandwidth internet browsing). In our existing scholarly fields and advocacy goals, where are the current gaps in terms of abilities, resources, scale, efficiencies, audiences, ethics, and impacts? After identifying those major needs, we’re better positioned to explore how LLMs might do good or ill.


AI staff speaking

“Reimagining AI for Environmental Justice and Creativity” symposium: Jeremy Boggs, Amanda Visconti, Will Rourk, and Alison Booth were invited as expert speakers on AI intersections with cultural policy, heritage, and creativity (10/23; sponsors: UVA Karsh Digital Tech for Democracy Lab, Environmental Institute, Data Science).
