
Introducing MuSE: Linking Music-Theoretical Concepts Across Languages

April 13, 2026, 23:37

This past January, the CDH kicked off a new Collaborative Research Partnership with Anna Yu Wang (Assistant Professor of Music) and Jürgen Hackl (Assistant Professor of Civil and Environmental Engineering), the researchers behind the larger Music Theory in the Plural project. MuSE—short for Multilingual Semantic Embeddings—asks: when scholars write about music theory across languages, including Chinese, Japanese, Spanish, and Portuguese, are they talking about the same things? The project sets out to evaluate whether multilingual LLMs can translate the domain-specific discourse of music theory without flattening its nuance, and to test computational methods for discovering related concepts across those language traditions.

Led by RSE Laure Thompson, the CDH team is working with the scholar-translated articles in Music Theory Online volume 30, number 4, which presents articles written in Chinese, Japanese, Portuguese, and Spanish (among others) alongside their English translations. This resource allows the team to assess automated translations against expert scholarly ones. From there, the team will experiment with embedding models as tools for surfacing cross-linguistic connections across a broader corpus. The work aims not just to answer these questions for music theory, but to contribute broader, critically needed comparative research on how LLMs perform with specialized humanistic content.

Like all CDH Research Partnerships, MuSE begins with a project charter that defines the project's scope, deliverables, team roles, and terms of collaboration. The MuSE collaboration is one small part of the larger research agenda of the Music Theory in the Plural project, and it is through chartering that the project team and the RSE team define how the pieces will fit together. You can download the MuSE project charter here and follow the project's progress on its project page.

The Center for Digital Humanities is seeking proposals for innovative and computationally-engaged research partnerships from Princeton faculty. The next application cycle closes April 17, 2026. Apply now.


Journal of Cultural Analytics Enters New Chapter with CDH, Joins Open Journals Collective

February 18, 2026, 20:41

In January 2026, Princeton University's Center for Digital Humanities (CDH) began serving as publisher of the Journal of Cultural Analytics (JCA), a leading open-access publication in computational approaches to culture. Today, CDH announces JCA’s vision for expanding cultural analytics scholarship amid rapid technological change and the launch of a new website, supported by Schmidt Sciences’ Humanities and AI Virtual Institute (HAVI).

"The Journal of Cultural Analytics has been instrumental in advancing computational methods in the humanities," said Meredith Martin, faculty director of the CDH and professor of English at Princeton, who serves as one of the journal's three editors alongside Amelia Acker (Rutgers University) and Tanya Clement (University of Texas at Austin). "We are honored to lead JCA's continued evolution and grateful to Andrew Piper for his pioneering work in establishing this field-changing, scholarly venue."

Building on a Strong Foundation

Founded by Piper at McGill University's Department of Languages, Literatures, and Cultures, JCA has published groundbreaking data-driven research about culture since 2016. The journal encourages transparent research practices, including open sharing of data and code. It has become a cornerstone publication for scholars working at the intersection of digital humanities, computational social sciences, and computational approaches to culture.

"The idea for the journal was born in 2015 as a response to a shared sense that our field needed a venue dedicated to the critical use of computation to study culture," said Piper. "After a decade of growth, the journal has far exceeded my hopes. I'm extremely happy to see it continue under the leadership of the new editors and its new institutional home at Princeton's Center for Digital Humanities."

Looking Ahead: Expanding Scope and Impact

JCA is broadening its vision to serve an ever-evolving interdisciplinary and international scholarly community invested in cultural study and the methods by which we interrogate the digital in culture – especially in the age of AI. Central to this vision is the commitment to publishing work that goes beyond method for method's sake, asking instead how computational approaches to culture at scale can reshape what we know and how we know it.

The editorial board has expanded to 43 scholars representing institutions across North America, Europe, Asia, and Australia, reflecting JCA’s commitment to international perspectives and increasing representation from junior scholars. This expanded scope will support the journal’s growing focus on multi-lingual and multi-modal approaches to culture.

A new Special Features section, edited by Laura McGrath (Temple University), will highlight shorter, timely essays on computational cultural analysis written in an accessible style for non-specialist audiences, designed to spark discussion on new methodologies, datasets, or research.

JCA will deepen its focus on critical engagement with data, which is increasingly significant for AI researchers returning to smaller, human-curated cultural models. Welcoming a new data editor, Sarah Reiff Conell (Princeton University Library), JCA will revise its data-essay and dataset-review formats in collaboration with scholarly “data collectives” (such as Post45 and 19thC Data Collective), and provide a directory of datasets for cultural studies.

Upcoming Special Issues will explore topics including computational humanities in the Global South, data-driven approaches to poetry, and a retrospective on ten years of the JCA. The journal is currently accepting Special Issue proposals for 2027.

New Infrastructure for Open Access, Community-Led Publishing

The transformative support from Schmidt Sciences’ HAVI program has enabled JCA's growth and modernization, expanding the editorial team with new roles for graduate students—providing both recognition and compensation for the labor required to run an academic journal and an opportunity to train the next generation of computational humanities scholars. The grant has also enabled the journal to migrate to Janeway, an open-source publishing platform developed by the Open Library of Humanities, featuring a redesigned user interface and customizable workflow management system.

In this new phase, JCA maintains its commitment to diamond open access—free to read and free to publish, with no article processing charges (APCs) or publishing fees for authors or universities. JCA has also joined the Open Journals Collective, a coalition of libraries and university-based publishers that launched in March 2025, providing journals with technological support, financial sustainability, and community governance through a library-funded model that keeps research freely accessible and journals editorially independent.

"I'm thrilled to have such a prestigious Princeton journal carrying the banner for diamond open access as part of the launch collection. We're excited for JCA, and for the promise of the new, sustainable funding model OJC is delivering," said Matthew Kopel, Princeton's Open Access & Intellectual Property Librarian, who also sits on the Open Journals Collective Library Board.

More information about the journal's new direction, upcoming issues, and submission guidelines can be found on JCA's newly launched platform at https://culturalanalytics.org.

Editorial Team

Editors

  • Meredith Martin, Princeton University
  • Tanya Clement, University of Texas at Austin
  • Amelia Acker, Rutgers University

Special Features Editor

  • Laura McGrath, Temple University

Data Editor

  • Sarah Reiff Conell, Princeton University Library

Graduate Editorial Assistants

  • Cecelia Ramsey, Princeton University (Managing Editor)
  • Odalis Garcia Gorra, University of Texas at Austin
  • Haiqi Zhou, McGill University
  • Emilien Arnaud, Princeton University

Former Editorial Assistant

  • Katrin Rohrbacher

Three logos on a white background for the Center for Digital Humanities at Princeton, Schmidt Sciences, and JCA, displayed side by side.

About the Center for Digital Humanities

Princeton's Center for Digital Humanities, founded in 2014, advances computational and data-intensive humanities scholarship through collaborative research, innovative pedagogy, and community building to create a more just future. The center develops better practices in technological development and research while bringing humanistic perspectives to data science applications.

About the Open Journals Collective

The Open Journals Collective is a growing coalition of libraries and university-based publishers providing sustainable, community-led alternatives to commercial academic publishing. Through diamond open access and collective funding models, OJC supports hundreds of journals while ensuring research remains freely accessible to all.

About Schmidt Sciences

Schmidt Sciences is a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to accelerate scientific knowledge and breakthroughs with the most promising tools to support a thriving planet. The organization prioritizes research in areas poised for impact, including AI and advanced computing, astrophysics, biosciences, climate, and space—as well as supporting researchers in a variety of disciplines through its science systems program. The Humanities and Artificial Intelligence Virtual Institute (HAVI) intends to spur innovative, domain-specific research outcomes from humanities scholars through the integral application of AI-inspired tools and techniques, as well as produce insights from the humanities that will advance the development of AI.

Meredith Martin part of $450K HAVI Award on Johns Hopkins-Led Project Analyzing Hierarchical Structure in Poetry, Music, and Narrative

December 12, 2025, 00:21

Congratulations to Meredith Martin, Professor of English and Director of the Center for Digital Humanities, who has been selected for a 2025 Humanities and AI Virtual Institute (HAVI) award by Schmidt Sciences.

Professor Martin is part of a grant of up to $450,000 led by Tom Lippincott (Johns Hopkins University) with co-PIs John Hale (Johns Hopkins University) and Robert Lieck (Durham University). The project, "An ML Toolkit to Find Hierarchical Structure in Multi-Modal/Lingual Data," will develop computational methods to analyze structural patterns in poetry, narrative fiction, and music across different languages and historical periods.

The collaboration brings together expertise in literary studies, linguistics, musicology, and machine learning to create tools that both apply AI to illuminate cultural artifacts and draw on humanistic understanding to advance how AI models learn sequential structure. The interdisciplinary team includes researchers from Johns Hopkins University, Princeton University, and Durham University.

About the Project

The research addresses a fundamental challenge: how humans experience and create hierarchical structure in cultural artifacts under cognitive limitations. From poetic meter and narrative patterns to musical form, structure guides creative expression and shapes how we perceive and remember art across time and cultures.

The team will develop a Python library for defining, training, and interpreting sequence models designed to infer hierarchical structure, along with three humanistic case studies examining poetry, language and narrative, and music. Professor Martin will lead the poetry case study, investigating how poetic structures—including form, rhyme, and meter—have been deployed, interpreted, and taught over more than a millennium of English verse. Working with the Chadwyck-Healey English Poetry corpus of 336,180 poems written between 900 CE and the present day, the research will explore questions such as when and where poetic meter is predictably regular or irregular, and how metrical and rhythmic patterns carry meaning across time.

Broader Impact

Schmidt Sciences has awarded $11 million to 23 research teams around the world who are exploring new ways to bring artificial intelligence into dialogue with the humanities, from archaeology and art history to literature, linguistics, film studies, and beyond. As part of the Humanities and AI Virtual Institute (HAVI), these interdisciplinary teams will both apply AI to illuminate the human record and draw on humanistic questions, methods, and values to advance how AI itself is designed and used.

Read more about all 23 awarded projects: https://www.schmidtsciences.org/havi-2025-announcement/

Celebrating the HAVI Community

Beyond celebrating Professor Martin's award, we're thrilled to see so many CDH colleagues and collaborators among this year's 23 HAVI awardees—a testament to the vibrant, interconnected community advancing digital humanities research.

Peter Henderson (Computer Science, Princeton University) is developing "AI for Understanding the Law and its Evolution"—creating AI tools to trace how legal ideas spark, spread, and change across centuries of multilingual, multimodal legal texts.

Jim Casey (UC Santa Barbara) is leading "Communities in the Loop"—developing AI methods to identify veiled protest in 19th-century Black newspapers. Jim is a former CDH postdoc, and we're proud to see his vital work recognized.

Peter Bol (Harvard University), a Princeton PhD alum from 1980, is leading "Augmenting Retrieval for Eurasian Languages"—training multilingual AI models to study Asian-language manuscripts, including low-resource languages like Tibetan, to reduce bias in historical research.

David Bamman (UC Berkeley) is creating Kinolab to bridge large-scale computational analysis with close viewing of film and television. David has been a valued collaborator through our New Languages for NLP initiative, LLM speaker series, and Ends of Prosody event.

Co-investigator Lauren Tilton (University of Richmond) spoke at CDH in October 2024 on "Distant Viewing: AI and Ways of Seeing" for our Humanities for AI series.

Matthew Wilkens (Cornell) leads "Artificial Intelligence for Cultural and Historical Reasoning" with collaborators including David Mimno (PPA board member), Ted Underwood (Startwords contributor), and Andrew Piper (early Humanities Council visitor through German).

Gregory Crane (Tufts University) is working on "Beyond Translation: Opening up the Human Record." Greg was the very first speaker we invited for the Digital Humanities Initiative, back around 2012. His collaborator, David Smith, serves on our PPA board and participated in Ends of Prosody.


Human and Machine Intelligence in Networks of Early Modern Print: Q&A with John Ladd

December 9, 2025, 05:43

The CDH’s Humanities for AI initiative, launched in fall 2024, has presented a range of events, projects, and conversations exploring how humanistic values and approaches are crucial to the development, use, and interpretation of the field of AI, including this year’s Modeling Culture program.

Continuing our Q&A series where we share perspectives on the impact of AI on humanities scholarship, we welcomed John Ladd (Assistant Professor, Department of Computing and Information Studies, Washington & Jefferson College) to respond to some questions after his talk in September. In “Human and Machine Intelligence in Networks of Early Modern Print,” he investigated how artificial intelligence and other computational approaches can help us to understand the distant past.

Your work bridges early modern literature and computational methods. How does your research and teaching inform your understanding of “Humanities for AI”?

In my research, I frequently apply computational methods and digital tools to early modern book history and literature. I teach in an interdisciplinary computing program where I show students how to apply humanities methods and objects of study to data science and the history of technology. It’s this back-and-forth exchange, of using technology in the humanities and using the humanities to understand technology, that the digital humanities has long stood for and that can help us frame the humanities’ relationship to AI. Humanities scholars continue to demonstrate the value of interrogating AI ethically, critically, and in historical context, and I believe that we’re starting to see the ways large language models might be used, with sensible guardrails, as research tools as well as research subjects.


Impressio Librorum / Book Printing, 16th-century engraving by Theodoor Galle after a drawing by Johannes Stradanus, c. 1550

In your talk, you noted that human-machine interaction has been happening since letterpress printing itself, using an engraving of an early modern print house to illustrate “the merger of different kinds of expertise” and drawing attention to the human labor behind seemingly “magical” new technologies. How does thinking about this historical precedent shape your approach to contemporary AI tools in humanities research?

I tend to focus on continuities between contemporary AI/large language models and a long history of technological change. The idea that AI is a revolutionary break from all previous technologies is a narrative that serves particular interests. I prefer narratives that put AI and LLMs into social and cultural contexts, like the recent frameworks of “AI as cultural and social technologies” and especially “AI as Normal Technology.” For humanities researchers and everyday users, it’s advantageous to think of AI as part of a long history of text technologies, going back to the printing press and before. This is why I emphasize LLMs’ ties both to decades of work in the digital humanities and to a broader view of the history of technology. This view helps us to see the many layers of human labor that have gone into AI, and the ways AI is still reliant on human expertise, just as other similar text technologies have been.

You’ve worked extensively on building research tools for humanists, from Network Navigator to the EarlyPrint Bibliographia. Now that LLMs have entered the picture, how has the landscape of digital humanities “tool building” changed? What ways have you found to engage with LLMs in your development work, with or beyond the chatbot?

An LLM itself is a new kind of tool, but it’s also a way to make tool-building more accessible and inviting. AI coding assistants can make it less intimidating to build a custom tool or website for a personal research project. There’s still a lot of value, and necessity, in learning how code works and how to make code work for you, but I am hopeful that LLMs will lower the barrier to entry and make the process more inviting. The novelist and experimental coder Robin Sloan has used the phrase “an app can be a home-cooked meal” to describe the empowering process of taking toolmaking into your own hands and building something just for yourself or for a small group. Instead of large-scale apps, these bespoke tools are so often the kinds that researchers really need, and LLMs may open the door for more such tools to be built.


John Ladd @ CDH, September 2025. Photo: Carrie Ruddick

Your work on local LLMs (with Melanie Walsh et al.) emphasizes privacy and sustainability for humanities AI research. Can you talk about why running models locally matters for humanistic inquiry? What are the technical and ethical considerations that led you to focus on this approach?

It’s essential for humanities researchers to find ethical ways of working with LLMs that respond to the many valid critiques of this technology. Working with LLMs locally reduces their environmental impact. Instead of processing your query at a massive data center, your own regular computer hardware can run the task, saving water and energy. The prompt and data also never leave your computer, making the whole interaction more private and avoiding corporate interests. While not every humanities research task can be run this way, many of the datasets and research questions that humanists use are at a scale that can be processed by a local LLM. The local models give you more control over the entire process, and they make the task more replicable, so the work can be verified and reviewed. Both technically and ethically, I think local LLMs are a great path forward for many folks who want to work with this technology, and I’m working to make sure more people know this is an option!

In the case study you presented, examining whether early modern printers produced books on the same subjects over their careers, you combined text classification, network analysis, and data visualization. What did machine learning reveal that traditional bibliographic methods might have missed—and vice versa? How does a “mesoscopic” approach help you navigate moments where computational and analog approaches yield different insights?

Many early modern book historians have conducted studies of particular printers or groups of printers and publishers, examining their output to determine continuities in subject or genre. In my talk, I used the example of publisher Humphrey Moseley’s reputation as one of the preeminent publishers of literary and poetic works. This kind of close analysis is ideal for traditional bibliographic methods, but bibliography is also interested in the large-scale question of whether publishers like Moseley (and the printers who worked with him) are outliers or are part of a larger pattern. The machine learning methods I used are very good at finding patterns over tens of thousands of texts, which would be difficult or impossible to do by hand, and it’s how I was able to establish that printers do tend to have consistency in their outputs over time. As Chris Warren and Martin Mueller have each argued, what computational methods can do is let us connect the general pattern to the particular case, in this instance showing that the observed patterns for specific stationers carry through to larger trends in Restoration printing.

What is your greatest concern and biggest hope for the future of AI in humanities scholarship?

The biggest concern for me is the corporate logic that underlies the ways AI is being sold to and adopted by the general public, and this logic drives many of AI’s most troubling qualities: environmental problems, intellectual property problems, and labor problems. This is the ideology that attempts to set up AI as in opposition to or as a replacement for the arts and humanities. We should resist easy narratives that conclude that AI should write for you, make art for you, or do your job for you. Many humanities scholars have already begun the important work of pushing back on these narratives and making it clear that AI doesn’t stand apart from other technologies and shouldn’t be exempted from ethical and legal critique.

But by properly contextualizing AI within the history of text technologies, one hope I have is that more light can be shed on the amazing work being done with natural language processing and text analysis in the humanities. AI has made more people aware of text analysis and machine learning, to a degree I never would have thought possible a few years ago. Digital Humanities scholars who’ve been doing this work for years have a chance to share their expertise with a wider audience and help to craft new narratives around large language models that might move us past the current era of corporate chatbots. My colleagues in the Modeling Culture seminar are producing some incredible scholarship that merges LLMs and the humanities, and my hope is that more people will seek out and learn from this work.

About John Ladd

John Ladd is an assistant professor in the Department of Computing and Information Studies at Washington & Jefferson College. He teaches and researches on the use of data across a wide variety of domains, especially in cultural and humanities contexts, as well as on the histories of information and technology. Building on an English literature background where he studied the intersection between computational methods and early modern print culture, his work includes large-scale digital humanities projects, such as Six Degrees of Francis Bacon and Early Print. He has published essays and web projects on humanities data science and cultural analytics, computational bibliography, the history of data, and network analysis.

Checking in with Ed Baring: Motivation and Lessons behind Citing Marx

October 7, 2025, 04:03

Edward Baring, Professor of History and Human Values, is co-leading an innovative project with the Center for Digital Humanities (CDH) that will transform how scholars understand the circulation and interpretation of Marxist ideas. "Citing Marx" aims to track published citations of the Manifest der Kommunistischen Partei (Communist Manifesto) and Das Kapital (vol. 1) within articles of Die Neue Zeit, a German socialist periodical, focusing on volumes published between 1891 and 1918. With the expertise of the CDH’s humanities research software engineers, computational tools are being developed to identify and analyze how Marx's key works were quoted and interpreted during this crucial period of socialist debate, with a goal to build reusable software for future applications. As the project progresses, we checked in with Baring to learn more about the origins of this collaborative project with CDH. He details the surprising challenges and possibilities of working with research software engineers and how this digital approach relates to his forthcoming book, Vulgar Marxism.

What moment or insight sparked the idea to digitally track citations of Marx's key works in Die Neue Zeit? How does this project build upon your broader research?

A couple of years ago, I was reading State and Revolution (1917), in which Lenin quotes multiple texts by Marx to promote his view about the right way forward for the Bolsheviks in the fall of 1917. I thought to myself that it would be really interesting to know whether these were just random quotes that he had combed through Marx’s texts to find, or whether he was participating in a longer tradition of Marx interpretation. The answer to this question would make a big difference for my understanding of his text. Twenty or thirty years ago, it would have been virtually impossible to answer this question, but I realized that with the right tools, we might be able to do so now. That’s when I approached Natasha at the CDH to start a conversation.

This project involves working closely with Humanities Research Software Engineers at CDH. What has surprised you most about this collaborative process? How has the experience of working with the CDH Research Software Engineers (Rebecca, Laure, Hao) shaped your understanding of what's possible versus what you initially imagined?

It's been a great pleasure working with the talented engineers at the CDH. They have had amazing ideas and have been very helpful. What surprised me was how badly calibrated my expectations were. The tools Rebecca, Laure, and Hao have developed or introduced me to can sometimes do extraordinary things that I would never have thought possible. For instance, it looks like we might be able to develop a tool that can identify Marx citations even in idiosyncratic and one-off translations. That blew my mind. But it is also the case that some seemingly simple problems can be taxing for digital methods, and you quickly recognize the value of working with a team to think through them.

A collaboration with the CDH also includes dedicated project management and project design support. Based on your collaborations with Mary, Jeri, and Ben, what perspective would you share with other researchers embarking on a collaborative humanities research project for the first time?

In the humanities, we really aren’t used to working in large teams, but that has been essential to the project so far. Mary, Jeri, and Ben have helped us work effectively together, making sure that we make progress. But it has been an adjustment for me to learn to use the various digital tools that structure collaborative work today and to fit into its rhythms.

One of the project's key goals is to create a reusable and extensible software package that could work with other journals, languages, and Marx works. As you've seen the pipeline develop, what possibilities are you most excited about for its application beyond Die Neue Zeit? What potential do you see for tools such as this transforming historical research methods?

It would be fantastic if we could expand to journals and books in other languages. In my research, I am very interested in transnational intellectual history. How do ideas travel beyond national borders, and how should we study that? And for these questions, it would be transformative if we could track the history of a particular quotation, compare the “Marx” being cited by intellectuals in different parts of the world, or even return to Marx himself to see which parts of his writings have been picked up by later writers, and which parts have been neglected. We would start to see the tradition with new eyes.

Your new book, Vulgar Marxism, will be released by the University of Chicago Press in December. In what ways does the research that will be enabled through your CDH collaboration connect with this and other earlier work?

Vulgar Marxism follows the development of Marxism on a transnational scale. I look at writings in German, French, Russian, Hungarian, Italian, Spanish, and Dutch. I argue that one of the main things holding Marxism as an intellectual enterprise together in so many countries is a common participation in the project of mass worker education. The research that will be enabled through the CDH would allow me to expand this research. We would be able to see how the ideas and quotations of Marx that were central to thinking through mass worker education were picked up by other figures. It would also allow us to track other types of intellectual connections and thus get a better sense of why and how Marxism came to be such a powerful intellectual force in so many places around the world.


The Future of Storytelling in the Age of AI: Q&A with Nnedi Okorafor

September 10, 2025, 08:10

Since fall 2024, the Center for Digital Humanities has led the “Humanities for AI” initiative through a series of events, projects, and conversations. We explore how humanistic values and approaches are crucial to developing, using, and interpreting the field of AI. As part of this effort, we publish a Q&A series with our guest speakers to further investigate perspectives on the impact of AI on humanities scholarship.

In April, we invited award-winning novelist Nnedi Okorafor to Princeton to discuss her new novel, Death of the Author, and the future of storytelling. In her “book-within-a-book,” a disabled Nigerian-American woman writes a successful sci-fi novel where “androids and AI wage war in the grown-over ruins of human civilization.”

During the conversation, Nnedi expressed hope that storytellers of all kinds understand the necessity of process, particularly experience as process, and are not “seduced” by the convenience of AI. Fascinated by tech and robots (she has several in her home!), and the ways they can help people with disabilities, she is optimistic that great stories will shine through the AI slop.

In your opinion, has speculative fiction influenced the rise of generative AI?

Absolutely. AI was imagined in science fiction (which is part of speculative fiction) narratives before it was created. First comes thought, then comes action. That’s not even my opinion, it’s fact.


Students pose for a photo with Nnedi Okorafor (at center) in the first year seminar Speculative Fiction: From Pygmalion to ChatGPT (FRS 142). (Photo: Carrie Ruddick)

You’ve described Africanfuturism as “skew[ing] optimistic.” What fuels that optimistic view for you?

That line is poking at the way Africa has been viewed from a Western perspective: as a place of poverty, disease, and war. The Africanfuturist perspective tends to be from the perspective of Africans, not the friends of or someone interested in Africa. The stories are mainly directly connected to those who have skin in the game.

Africanfuturism skews balanced, nuanced, from a place of knowledge and respect. It understands African culture, people, politics, the land, and futures as a whole. Africanfuturism doesn’t tend to romanticize Africa out of guilt or wallow in tragedy because such stories sell well in the West. Africanfuturism, whether it's a dark, bloody type of story or a whimsical, flowery type of story or whatever, mostly has Africa’s back because it is African.

What is your greatest concern for the future of AI for the humanities?

My greatest concern for AI is what’s already been happening. It’s that the models used to create it and train it contain the DNA of patriarchy. Patriarchy is about control, a need to be beholden to one’s creator, to be a reflection of its creator, and to be fed by its creator. Patriarchy has no respect for those it feels it dominates, hence the shameless theft of copyrighted works. All one needs to do is look at who created the technology. It didn’t have to be this way. Right now, AI software is not about making humanity greater or fixing tough problems; it’s about making money. All this will only lead in one direction.

What is your biggest hope for the future of AI for the humanities?

My biggest hope is that those who see these problems turn the ship in a better direction. Nothing is inevitable. We are in full control of how this goes. And not just the creators—the users, as well.


Nnedi and Chika Okeke-Agulu in conversation. April 2025. (Photo: Ali Nugent)

About Nnedi Okorafor

Nnedi Okorafor — the global leader of Africanfuturism, and an international literary superstar — is an award-winning novelist of science fiction, fantasy, and magical realism for adults and young readers. Among her many works are the Binti trilogy, the Akata Witch books (both optioned for the screen), and her latest, Death of the Author, which George R.R. Martin calls "[h]er best work yet... about fame, culture, the power of story, the writer’s life... and robots.” She is the author of Black Panther: Long Live the King, and she authored the spinoff graphic novel, Wakanda Forever, which became a Hollywood blockbuster. Okorafor is the winner of the Hugo, Nebula, World Fantasy, Locus, and Lodestar Awards. She holds a PhD in Literature, two Master’s Degrees (Journalism and Literature), and lives in Phoenix, Arizona, with her daughter Anyaugo.

The talk in April was supported by the Belknap Fund in the Humanities Council and co-sponsored by the Africa World Initiative, the Program in African Studies, the Princeton African Humanities Colloquium, and the Princeton Public Library.


The Incompatibilities Between Generative AI and Art: Q&A with Ted Chiang

August 13, 2025, 03:15

This past year, the Center for Digital Humanities celebrated its tenth anniversary with the theme “Humanities for AI.” Through this series of events, projects, and conversations, we explore how humanistic values and approaches are crucial to developing, using, and interpreting the field of AI.

As part of this initiative, we were thrilled to welcome award-winning writer Ted Chiang to Princeton on March 18 to present his talk “The Incompatibilities Between Generative AI and Art” with support from the AI Lab, Humanities Initiative, and Princeton Public Library. In this talk, he expanded on points from his essay “Why A.I. Isn’t Going to Make Art” in The New Yorker (August 2024). To delve deeper into topics such as artistic self-expression and the role of choice, as well as the tension between art and commerce, we invited Chiang to respond to a set of questions related to AI and its impact on humanities scholarship.

What does Humanities for AI mean to you?

The goal of universities is to produce graduates who can be more than just workers at widget factories, and studying the humanities is an essential part of that. Capitalism's goal is to turn the entire world into a widget factory, and AI is a powerful tool for achieving that. So I see Humanities for AI as an attempt to wrest the technology from the hands of capitalism and find uses for it other than extracting economic value from people.

In your opinion, has speculative fiction influenced the rise of generative AI?

Not directly. What we think of as generative AI only started around 2020 with programs like GPT-3 and DALL-E, and it wasn't something that even people working in AI had anticipated; they simply discovered that their programs had some unexpected capabilities and decided to lean into them. While there have been science-fiction stories about machine-generated fiction and art in the past — some of which seem eerily prescient in retrospect — I don't think anyone working in AI was aware of them or drawing inspiration from them.

If we zoom out from generative AI to consider AI more broadly, then I'd say speculative fiction has had a big role. The idea of the singularity — a point in time when machine intelligence exceeds human intelligence — was popularized by the science fiction writer Vernor Vinge. Vinge had an enormous influence on the Extropian community in the 1990s, and that community influenced AI research in the 2000s. I think it's also important to note that it was a non-fiction essay of Vinge's that was most influential, rather than his fiction. The practice of presenting fictional scenarios as non-fiction has now become the norm in Silicon Valley.


Chiang visits Dr. Naydan's first-year seminar, "Speculative Fiction: From Pygmalion to ChatGPT." Spring 2025. (Photo: Carrie Ruddick)

It's hard for me to imagine a scenario where AI helps writers do good work.

Do you envision scenarios where AI positively influences creative writing? What conditions do such possibilities require?

It's hard for me to imagine a scenario where AI helps writers do good work. Writing involves very little overhead; it's not like making a movie, where your budget determines what possibilities are available to you. You can write with a pencil and paper and do pretty much the same work as with a typewriter or a word processor. When you write, your medium is sentences, and I don't know what it would look like to have a technology that gives you greater control over sentences. Because of that, writing is relatively unaffected by advances in technology. This is also why I don't think the word processor has had a significant impact on creative writing; whatever changes we've seen in the novel over the last fifty years have probably been due to other cultural factors. I've read the claim that novels have gotten longer because of word processing, but I think even that has more to do with shifts in the publishing industry than with the increased ease of typing.

There might be certain creative possibilities opened up by explicitly using LLMs to write about LLMs, but I don't see that becoming a widespread practice. There's a form of visual art called scanography, which relies on the effects made possible by digital scanners. Without intending any insult, I think it's fair to say that scanography is a niche genre. I'd say that generative AI has comparable potential for creative writing.

Using ChatGPT to write your essays is like bringing a forklift into the weight room.

What advice do you have for college students who face the prospect of using generative AI in their studies?

Everyone should think carefully about using generative AI simply because the technology is built on environmental destruction, labor exploitation, and IP theft. College students should think extra carefully about it because, even if those other issues were magically resolved, using generative AI is largely incompatible with the purpose of education. In the talk I gave, I said, "When you’re a student at a university, you should think of yourself as an athlete in training, and the job you'll do after you graduate is the sport you will compete in. You don’t know specifically which sport you will play, and neither do your professors. What your professors do know is that strength training will help you. That’s what essay writing is; it’s strength training for the brain. Using ChatGPT to write your essays is like bringing a forklift into the weight room; you are never going to improve your cognitive fitness that way." Let me expand on that. Building strength requires exertion; if anyone offers you an exercise program that involves no exertion at all, you know it's not going to be effective. The improvements that come from doing cognitive exercise are not as rapid as those that come from physical exercise, but they are just as real. Writing an essay is hard because it forces you to use your brain in ways you haven't before, and that is precisely why it's useful. Your job is not to turn in completed assignments; it's to learn how to think. Turning in completed assignments can help you learn how to think, but only if you're the one who completed them.

Your job is not to turn in completed assignments; it's to learn how to think.


Chiang signing books at the end of the talk. March 2025. (Photo: Ali Nugent)

You ended your talk with a call for people to go out and create something meaningful to themselves or someone else. What is a creation you have read, seen, or experienced recently that has been meaningful to you?

The TV series ANDOR really impressed me. I don't particularly care about the STAR WARS universe; the only reason I tried this series was because Tony Gilroy was involved. In terms of craft, the series is a marvel; the dialogue, performances, production design, and music are all excellent. But completely separate from that, I think it's a remarkable depiction of what's involved in fighting fascism. Critics have said that one reason for the original STAR WARS' popularity was that, in the post-Vietnam era, it allowed Americans to feel good about themselves again by reminding them of "just wars" like the American Revolution or World War Two. What ANDOR does is more complicated and subversive. In the original movie, you could read the Empire as being a stand-in for Nazi Germany, but in ANDOR, it's hard for me to read the Empire as being anything other than a stand-in for the United States.


Chiang speaking to a packed McCosh Hall. March 2025. (Photo: Ali Nugent)

About Ted Chiang

Ted Chiang's fiction has won four Hugo Awards, four Nebula Awards, six Locus Awards, and the PEN/Malamud Award and has been reprinted in The Best American Short Stories. His first collection, Stories of Your Life and Others, has been translated into twenty-one languages, and the title story was the basis for the Oscar-nominated film Arrival. The New York Times chose his second collection, Exhalation, as one of the 10 Best Books of 2019. As a 2023 TIME100 Most Influential Person in AI, Chiang is described as “perhaps the world’s most celebrated living science-fiction author.”


Why A.I. Isn’t Going to Make Art — The New Yorker

By Ted Chiang. To create a novel or a painting, an artist makes choices that are fundamentally alien to artificial intelligence.


Life Is More Than an Engineering Problem — Los Angeles Review of Books

Julien Crockett speaks with Ted Chiang about the search for a perfect language, the state of AI, and the future direction of technology.


Ted Chiang explores “incompatibilities” between generative AI and art

By Allison Gasparini, AI Lab and Center for Statistics and Machine Learning


Humanists and Technologists Join Forces to Advance Historical Text Recognition and Research

August 12, 2025, 03:52

Forging the future of text recognition for research focused on historical manuscripts, the Source Codes of the Past (SCOOP) conference connected an international network of experts at the Institute for Advanced Study (IAS) in Princeton in June 2025.

Collaborative in every regard, the conference was organized by Professor of History Helmut Reimitz, Center for Digital Humanities (CDH) Postdoctoral Research Associate Christine Roughan, and History Ph.D. candidate Lucia Waldschuetz, along with Professor of Medieval Studies at the IAS Suzanne Akbari and CDH Executive Director Natalia Ermolaev. The launch of this network was a joint venture of the IAS, the Princeton Humanities Initiative (PHI), the CDH, the Manuscript, Rare Book, and Archival Studies Initiative (MARBAS), and the Institute for Medieval Research at the Austrian Academy of Sciences. Along with the organizers, the Center for Collaborative History, the Department of Classics, the Seeger Center for Hellenic Studies, the Program in Medieval Studies, and the Committee for the Study of Late Antiquity also joined in sponsoring the workshop.

“Fostering collaboration is a major goal of the Princeton Humanities Initiative, and SCOOP brings together teams that are working across institutions, disciplines, and countries to advance our ability to learn about the past and inform our future.” 
— John Paul Christy, Executive Director, Princeton Humanities Initiative

Christine Roughan opens the inaugural SCOOP conference at the Institute for Advanced Study (Photo: Kirstin Ohrt)

Humanities and social science scholars, software engineers, and machine learning researchers—some wearing several hats—pooled their expertise, mutually informing one another’s understanding of automatic text recognition (ATR) and handwritten text recognition (HTR) technologies. This intensive think tank centered the challenge of adapting existing technologies for diverse scripts, textual traditions, and manuscript structures, especially for understudied languages and materials. Given the years of time and resources these projects have invested in separate streams, convening their leaders to share successes and challenges represents an efficiency windfall.

“AI and machine learning tools for text recognition are transformative—not only for deciphering individual manuscript traditions, but for enabling large-scale, comparative research that brings diverse cultural histories into meaningful conversation with one another,” said Ermolaev. “As these technologies become more sophisticated, it is essential that humanists are at the table, helping shape how these tools are designed and deployed. Scholars of historical languages and cultures bring deep knowledge that is critical to developing more accurate, ethical, and inclusive AI systems. The long-standing collaboration between humanists and technologists in digital humanities is more urgent than ever, as we work together to ensure that the cultural data of the past informs the technological futures we’re building today.”

“The long-standing collaboration between humanists and technologists in digital humanities is more urgent than ever, as we work together to ensure that the cultural data of the past informs the technological futures we’re building today.” 
— Natalia Ermolaev, Executive Director, CDH

Presenter Tobias Hodel (University of Bern) underscored the importance of leaning into the ATR/HTR community of stakeholders and experts. With the technical hurdle largely cleared, he said, the critical question, “What’s next?”, requires a collaborative answer. Achim Rabus (University of Freiburg) agreed that discussion between parties is imperative, as is sufficient training. He noted the debilitating gap between those with technological expertise and those with humanistic expertise, underscoring the importance of elevating training for both to arrive at maximum usability. Evaluating a program’s output anomalies, for example, requires collaborative examination when faced with the recurring problem: “We don’t know if it’s a bug or a feature.” Among definitive features, Rabus pointed to strides and further opportunities in smart transcription, which automatically interprets and expands abbreviations in the original text.


Benjamin Kiessling presents “Large Multilingual ATR Models and Humanities Practice - Conflicts and Pathways” (Photo: Kirstin Ohrt)

Benjamin Kiessling (Paris Sciences et Lettres University) campaigned for new text reader models to resolve the lingering problem that bespoke models cater to niche research questions. What’s needed, he said, is a way to align output with research questions and allow models to become more interchangeable or generalized. With this goal in mind, Kiessling has developed PARTY, or Page-wise Recognition of Text. 

The workshop illustrated that when stakeholders work together across functions and areas of expertise, the payoff for scholarship can be exponential. Using technology to make manuscripts accessible to scholars unfamiliar with their languages enables connections heretofore left on the table. This democratization of knowledge, said Rabus, is game-changing.

Launching a Graduate Student Text Recognition Technology Boot Camp

The conference included a comprehensive three-day ATR/HTR Training Workshop designed to train graduate students and scholars with various experience levels and backgrounds in text recognition technology. Led by instructors Helmut Reimitz, Christine Roughan, Anna Michalcová, Martin Roček, and Jan Odstrčilík, the workshop provided a structured progression from technical fundamentals to practical application. 

“We took care to structure the workshop so that it would offer training relevant to researchers working in any historical written tradition, because the underlying methods of ATR are not limited by language or discipline,” shared Roughan. “We were pleased to be joined by participants from history to NES, from art & archaeology to music as a result.”

The first day covered introductory concepts, including how HTR/ATR works technically, available ATR tools, and the basics of key platforms such as Transkribus and eScriptorium. The second day delved into the practicalities of using such platforms, covering topics such as layout and model training, data formats, and methodological considerations for both Latin and non-Latin scripts. The last day of the training workshop turned to the practicalities of using text recognition tools and outputs in research: HTR model evaluation, data sharing through platforms such as Zenodo and GitHub, and techniques for developing custom models using existing published models. Supervised hands-on practice sessions were conducted throughout to reinforce learning objectives.

“Knowing how to use the tools is just step one,” said Roughan. “The scholarly community continues to publish a wealth of data and models – knowing how to interact with and build upon that foundation empowers people to get the most out of research using text recognition methodologies.”

“Knowing how to use the tools is just step one. The scholarly community continues to publish a wealth of data and models – knowing how to interact with and build upon that foundation empowers people to get the most out of research using text recognition methodologies.” 
— Christine Roughan, Postdoctoral Research Associate, CDH

Training Workshop organizers from left to right: Christine Roughan, Helmut Reimitz, Jan Odstrčilík, Martin Roček, and Anna Michalcová. (Photo: Carrie Ruddick)

SCOOP 2.0

By all accounts, the conference exceeded its goals. “It was an amazing and extremely encouraging start for the network,” said Reimitz. “Everyone agreed that a platform for exchanging ideas between AI experts, computer scientists, and humanities scholars is urgently needed in order to take the application of HTR in the humanities to the next level.”

Paving the way to that next level, a second SCOOP workshop is already in the making. Hosted by the Austrian Academy of Sciences, SCOOP will reconvene in Vienna in summer 2026.

In the meantime, members of the SCOOP network are working to establish a digital communication platform to continue conversations on the implementation of text recognition tools in diverse projects involving various languages, scripts, layouts, and visualizations in original manuscripts and documents. The forum will also facilitate shared experimentation and modeling. “As an important focus, we agreed in Princeton on the question of interoperability issues and experiences with large established engines and smaller research groups working on under-resourced scripts and languages,” said Reimitz.

SCOOP’s partners (the Princeton Humanities Initiative, the Center for Digital Humanities, the Institute for Advanced Study, and the Austrian Academy of Sciences) are committed to carrying forward the momentum of this inaugural SCOOP conference. “Fostering collaboration is a major goal of the Initiative,” said Christy, “and SCOOP brings together teams that are working across institutions, disciplines, and countries to advance our ability to learn about the past and inform our future.”

“It was an amazing and extremely encouraging start for the network. Everyone agreed that a platform for exchanging ideas between AI experts, computer scientists, and humanities scholars is urgently needed in order to take the application of HTR in the humanities to the next level.”
— Helmut Reimitz, Professor of History

To join the SCOOP network or learn more: scoop@oeaw.ac.at, croughan@princeton.edu, anna.michalcova@oeaw.ac.at.

First-Year Students Explore AI Through the Lens of Speculative Fiction—Featuring Visits from Sci-Fi’s Literary Superstars

May 14, 2025, 00:52

What does it mean to be human in the age of artificial intelligence? As emerging technologies reshape daily life, DH Project Manager Mary Naydan *23 (English) turned to literature for answers. Her first-year seminar, Speculative Fiction: From Pygmalion to ChatGPT (FRS 142), examined how imaginative storytelling in science fiction, fantasy, and horror has long anticipated today’s AI debates.

The course began with classics like Mary Shelley’s Frankenstein (1818), prompting students to consider the ethical responsibilities of creators and ask, “Who should be held liable when a creation causes harm?” Despite its age, Shelley’s novel remains strikingly relevant in discussions of today’s rapidly evolving and largely unregulated AI landscape.

In the same week, students studied the origins of the “Frankenstein Complex,” a term coined by I, Robot author Isaac Asimov to describe fears of AI turning against its creators. They examined how such anxieties shape not only fiction but also real-world discourse, including a 2023 open letter from tech leaders calling for a pause in AI development.

By tracing concerns about artificial intelligence from Ovid’s account of the Pygmalion myth (8 CE) to contemporary works by authors Ted Chiang and Nnedi Okorafor, the course equipped students with the tools to think critically about AI. It empowered them to engage with the technology thoughtfully and be aware of both its possibilities and its ethical implications.

“Today’s students are inundated with AI tools, whether they realize it or not… and it is getting harder and harder to opt out,” says Naydan. Her goal is to help students make informed choices about the role of AI in their lives, as students and beyond.

This spring’s agenda was immersive and certainly one to remember.

Fostering AI literacy through critical engagement

Students built AI literacy through a series of hands-on labs designed to demystify how these systems work and encourage critical reflection. In the first lab, they used Google’s Teachable Machine to train a simple computer vision model to recognize yoga poses, observing firsthand how supervised machine learning relies on data patterns rather than human-like understanding. For example, a model trained only on right-legged tree poses struggled to recognize the same pose with the left leg, revealing the limits of generalization and challenging the tendency to anthropomorphize AI. Subsequent labs explored AI bias and emotional mimicry.


Students training a model to recognize yoga poses using Teachable Machine. (Photos: Mary Naydan)

Students also honed their prompt engineering skills by tasking ChatGPT with generating a hypothetical tenth story for I, Robot, then critically evaluating the outcome. In another session, they explored Sudowrite—an AI tool designed for creative writers—to examine the potential advantages and limitations of AI-assisted storytelling.

Later in the semester, during a unit on climate, students visited Princeton’s High Performance Computing Research Center (HPCRC), which houses the campus’s AI research infrastructure. The trip offered a concrete sense of AI’s environmental footprint and how Princeton’s graphics processing units (GPUs) compare to the scale and impact of tech giants like Meta, Tesla, and Google. As a LEED-certified facility, HPCRC demonstrates one model of sustainable infrastructure, and it left an impression, not least because students were fascinated by its eco-friendly lawn care: sheep to mow the grass!

“Visiting the HPCRC was a fascinating experience that deepened my understanding of high-performance computing and its role in cutting-edge research,” says Chinmayi Ramasubramanian ’28. “I have often used the Adroit computing cluster in class, and coming in, I was excited to see the actual hardware that powered my computations.”


Students on a tour of Princeton’s High Performance Computing Research Center. (Photos: Carrie Ruddick)

In preparation, students read Kate Crawford’s Atlas of AI chapter “Earth,” which challenges the illusion of a “green” tech industry. They learned that AI systems rely on massive computational power and energy, often hidden behind the metaphor of “the cloud.” The visit, paired with discussion, helped students connect abstract environmental critiques to the tangible materiality of AI infrastructure and to consider how AI’s growth drives both energy consumption and resource extraction globally.

“Seeing the physical hardware up close and learning about how everything runs made the complexity of these systems feel so much more real,” states Quinn Challenger ’28.


Exploring rooms and machines at Princeton’s High Performance Computing Research Center. (Photos: Carrie Ruddick)

Discussing AI with award-winning fiction writers

A highlight of the course was the opportunity for students to engage directly with two of today’s most influential speculative fiction authors, Ted Chiang and Nnedi Okorafor, both of whom visited the class before giving public lectures as part of CDH’s Humanities for AI series. Naydan emphasized the value of these visits: “It is one thing to talk abstractly about the relationship between AI and creative writing; it is another to hear from creatives directly—what they think about AI, how they use it (or choose not to), and why.”


Ted Chiang signs a student's book during the class. (Photo: Carrie Ruddick)

To prepare for the visits, students read and discussed each author’s work. Chiang’s work (The Lifecycle of Software Objects and nonfiction essays in the New Yorker) prompted conversations about the ethical implications of developing AI within capitalist systems and ChatGPT as an impediment to developing “cognitive fitness” in college. Okorafor’s work (Death of the Author and “Abracadabra”) inspired discussion on AI’s positive potential to support people with disabilities and improve healthcare.

“There was an electric atmosphere to the conversations,” Naydan recalls. “I was so impressed by the students’ willingness to think deeply, take risks, and have fun with ideas.”


Students pose with Ted Chiang in class. (Photo: Carrie Ruddick)

The visits left meaningful impressions on the students.

“Being a student-athlete, [Okorafor's] story about her experience as an elite athlete who turned a career-ending injury into a path toward writing resonated with me,” says Nathan Banos ’28. “It reminded me that it’s not about what happens to you, but how you respond. Their visits to our classroom were the perfect way to immerse ourselves in the world of both speculative fiction and AI, creating a unique learning environment and lasting memory for me.”

Stephanie Ko ’28 enjoyed the thoughtful balance in the course’s exploration of AI’s technological foundations and its human-centeredness, to which the visits added depth. “I was honestly awed to see how Ted Chiang simply sees and fulfills the need to be a well-informed member of the perpetual discussion regarding the future of AI,” she says, “which is a trait that I think many of us will aspire to develop and carry with us through our education at Princeton.” Of Okorafor, she shares, “She was incredibly transparent that her hopes for AI stem from her personal struggles and frustrations, and I think this perspective was a refreshing reminder that the development of AI and the role we allow it to have is a fundamentally human problem.”


Students pose for a photo with Nnedi Okorafor (at center). (Photo: Carrie Ruddick)

What comes next?

“If my students take just one thing away from my class, I hope it is the idea that AI is not objective, perfect, or neutral,” says Naydan. “It is fallible, because it’s only as good as its training data; it encodes and perpetuates human biases; and it is complicated in how it can help and hurt humanity, often simultaneously.”

She also stated that by the end of the course, there was no single answer to how literature responds to technological change in our society. Instead, students encountered a range of creative approaches: Okorafor imagines the technologies she hopes society will build; Kai-Fu Lee and Chen Qiufan use “scientific realism” to depict core concepts of machine learning; Terence Taylor explores the exploitation of labor in AI-driven workplaces through horror; and Philip K. Dick uses anthropomorphized AI to probe deeper moral and existential questions about what it means to be human.

Ultimately, the course underscored that literature has long been a tool for making sense of the world. As AI continues to reshape society, storytelling not only helps us understand emerging realities—it also offers a means to imagine and influence what comes next.

CDH offers many unique courses for undergrads. View past and future courses here.

Be sure to check out Dr. Naydan's next available course, Project Management 101, back by popular demand, at the next Wintersession on January 20, 2026. Browse last year’s slide deck here. Don’t hesitate to sign up when registration opens in December; there was a waitlist last year!

Related posts

Ted Chiang explores “incompatibilities” between generative AI and art

On March 18, the multi-Hugo-award-winning science fiction author and 2023 TIME100 Most Influential Person in AI lectured at Princeton University, laying out what he described as...


AI and Ways of Seeing: Q&A with Lauren Tilton

This year, the Center for Digital Humanities celebrates its tenth anniversary with the theme “Humanities for AI.” Through this series of events, projects, and conversations, we explore how humanistic values and approaches are crucial in developing, using, and interpreting the field of “AI.”

As part of this initiative, we welcomed Lauren Tilton (Director, Distant Viewing Lab; E. Claiborne Robins Professor of Liberal Arts and Digital Humanities, Department of Rhetoric and Communications, University of Richmond) to Princeton to present her lecture “Distant Viewing: AI and Ways of Seeing” on October 21. In this talk, she introduced the concept of distant viewing and demonstrated how AI is animating humanistic inquiry through examples from visual culture. To expand upon her reflections on “how digital humanities, data science, and the larger field of AI could reshape the world together,” we invited Tilton to respond to a set of questions related to AI and its impact on humanities scholarship and her work in visual culture and data science.


Lauren Tilton presents “Distant Viewing: AI and Ways of Seeing” at Friend Center, Princeton University, on October 21, 2024. Photo by Shelley Szwast.

What does “Humanities for AI” mean for you?

I find the idea to be a nice broad framing that centers on the role of the humanities in shaping AI. AI is often celebrated through a capitalistic, economic, and technological lens that centers on profit, innovation, and progress. Less frequently do we ask: Why are we building X and what is the social impact? Who is profiting, and who isn’t? Innovation according to whom? Progress from whose perspective? The humanities, particularly areas such as American Studies and Science and Technology Studies (STS), offer important frameworks, theories, and histories to address these questions. The humanities also allow us to imagine more creative possibilities for our AI future.

You have been developing the concept of “distant viewing” for some time. How is this project changing in light of developments in generative AI (genAI)?


Distant Viewing is a theory of how the computational exploration of digital images through the application of computer vision works, and why it is needed.


We think Distant Viewing (DV) is key to developments in genAI. Computer vision is critical to multimodal large language models (MMLLMs). DV offers a theory for understanding computer vision, a way to identify and interrogate the ways of seeing that we are building in these algorithms engaged in the act of distant viewing. Unpacking this component is vital to better understanding how MMLLMs are working. They are trained in specific ways of seeing (such as training sets that feature photography from the last two decades and classification of people). They encode specific visual cultures and then generate from there.

We see this now with MMLLMs. When Midjourney launched, it produced a specific vision of women: young, busty, and thin with flowing hair. They looked like a combination of anime (in style) and Western beauty standards (in physical features). As one thread on Reddit’s r/midjourney discussed, contributors were struggling to generate an "[u]gly, plain-faced, ordinary, awkward features, hideous, unattractive, gross…” (actual prompt) woman, because the tool kept generating “pinup-grade attractive” women. A particular idea of beauty was baked into the MMLLM, reiterating problematic ideas of “women” and female beauty through long-critiqued tropes.

At the same time, I think distant viewing offers us a theory for how we can continue to build different kinds of algorithms. The theory offers a way to critically (not necessarily negatively, but critique as careful analysis) read algorithms and imagine other ways of viewing what we want to design. DV, therefore, opens creative possibilities for us.


Distant Viewing: AI and Ways of Seeing, Lauren Tilton, Oct 21, 2024, Princeton.

Pixels convey meaning only when put into context with one another by mimicking the act of viewing objects, people, and environments directly through the human visual system.

You work at the intersections of visual culture and data science. What fruitful and exciting avenues of research is AI introducing for our understanding of visual semiotics, and media studies more broadly?

The Distant Viewing Lab primarily studies photography, television, and film. In some cases, we are focused on a direct question that animates media studies such as narrative arc, visual style, and changes in form and messages over time. AI strategies have included using algorithms out of the box, such as image segmentation and image embedding, and we’ve also designed our algorithms, such as a shot boundary detector bundled inside the Distant Viewing Toolkit. For example, we are working on a large corpus of TV sitcoms to explore the “genre” over several decades. We started this work several years ago with our article on Bewitched and I Dream of Jeannie in Cultural Analytics with scholar and LA Review of Books film editor Annie Berke (make sure to follow her brilliant commentary on all things TV and film!). The work became the basis of a chapter in Distant Viewing (MIT Press, 2023, Open Access). We are now looking at elements such as pacing, characters on-screen, shot type, and other features to look at developments in the genre over time. Recent developments in AI and processing power make the ability to analyze at this scale possible.

All the talk right now is about LLMs. MMLLMs offer exciting new avenues for us to engage in media studies. We use them for our sitcom work, and another significant part of our work focuses on the public humanities. We are deeply committed to supporting access and discovery of cultural heritage collections to help open the rich histories yet to be told. We have also been helping cultural heritage institutions assess their AI needs and identify where AI might support institutional goals. One project is ADDI: Access and Discovery of Documentary Images, where we assessed the power of specific AI approaches to open access across five collections with approximately 300,000 photographs. Working with the nation’s cultural heritage institutions, such as the Library of Congress and the Smithsonian, is a real privilege. I believe in our nation’s institutions, particularly the power of cultural heritage institutions, to tell new, silenced, and unexpected stories about who “we” are and where “we” have been. Being a part of supporting these institutions’ commitments to open access and serving such a wide range of publics has been a highlight of my career.

You direct the Distant Viewing Lab, as well as the Center for Liberal Arts and AI (launching fall 2025), both at the University of Richmond. How has running a “lab” influenced how you see collaborative scholarship in the humanities? And how will this new research center expand on that?

The experience of building DH projects, co-authoring articles and books, as well as running a lab and soon a center has instilled in me the power of collaborative scholarship. I find collaboration to be enriching. We’ve generated more nuanced, interdisciplinary, and cutting-edge scholarship, and I’ve had the opportunity to learn from brilliant colleagues. Collaboration is now at the center of CLAAI. Mainly, when one works at smaller institutions or institutions with limited resources, collaboration is one way to be stronger than the sum of our parts. CLAAI is built on this idea. By bringing together the strengths of each of the 15 undergraduate-focused small liberal arts colleges that are a part of the Associated Colleges of the South, I think we will be better positioned to expand and amplify cutting-edge teaching and research in AI. Undoubtedly, one needs to develop skills such as patience, openness, and trust to collaborate, but it has been worth it!


Lauren Tilton and Taylor Arnold, co-directors of Distant Viewing Lab and co-authors of Distant Viewing: Computational Exploration of Digital Images.

You are currently the co-president of the Association for Computers and the Humanities (ACH). What are some ways that ACH can play a role in this AI moment? 

We are currently working in several areas and welcome the community’s advice! One area is access to data. ACH has supported the University of California, Berkeley’s Samuelson Law, Technology & Public Policy Clinic and the Authors Alliance’s incredible work to help researchers access digital materials such as ebooks and DVDs subject to digital rights management, or DRM. (For more, Quinn Dombrowski and I wrote an article about the work of ACH in Digital Studies/Le champ numérique.) We have a new AI + DH Working Group led by Quinn Dombrowski and CDH’s Meredith Martin! The group will meet during ACH2025 to discuss the next steps, and all conference attendees are welcome to join. Another area that we are turning to is the environmental impact of AI, and we hope to launch a working group on this topic soon.

What is your greatest concern and biggest hope for the future of AI for humanities scholarship?

So many concerns and hopes!

One concern is the environmental impact of genAI, and our scholarship being part of the problem rather than the solution. There is a lot of attention on this issue right now, and we need to continue to make it central. I’m hopeful we will find a more sustainable way forward. At a minimum, there seem to be significant economic and corporate incentives for companies like Microsoft to find a better energy solution (assuming they want to stay in business for decades and have a market). This may be where renewable energy flourishes, even if driven largely by major power users and corporate logic.

One of my biggest hopes is that we continue to use AI in ways that ignite and support students’ and colleagues’ interests in the humanities. At their best, the humanities offer theories, histories, and ways of being that make us aware of how people have experienced and are experiencing the world. There is also a reality that students are often very interested in AI, for it is a technology that is shaping their every day. I see AI as one way we can animate our curiosity about the past and present. Digital humanities offer an exciting intersection well-positioned to continue at the forefront of these possibilities.


Lauren Tilton at Princeton University on October 21, 2024. Photo by Shelley Szwast.

Lauren Tilton is the E. Claiborne Robins Professor of Liberal Arts and Professor of Digital Humanities in the Department of Rhetoric and Communications at the University of Richmond. She also directs the Distant Viewing Lab. Her research focuses on analyzing, developing, and applying digital and computational methods to the study of 20th and 21st-century documentary expression and visual culture. Her primary scholarship incorporates theoretical and methodological approaches from American Studies, Media Studies, Public Humanities, and Data Science. She is committed to interdisciplinary, collaborative, open-access scholarship. She earned a PhD in American Studies from Yale University. She is currently the co-president of the Association for Computers and the Humanities (ACH), the scholarly association for digital humanities in the United States.

Announcing the 2024–25 Collaborative Research Grantees and Projects

July 3, 2024, 09:29

The Center for Digital Humanities is thrilled to announce the three project proposals awarded 2024–25 Collaborative Research Grants.

This year’s projects are “Princeton Open HTR Initiative: Creating Infrastructure for Modeling Historical Texts” (Marina Rustow, Khedouri A. Zilkha Professor of Jewish Civilization in the Near East, Professor of Near Eastern Studies and History, Director of the Geniza Lab; Helmut Reimitz, Shelby Cullom Davis ’30 Professor of European History, Professor of History; Christine Roughan, Postdoctoral Research Associate, The Center for Digital Humanities and Manuscript, Rare Book and Archive Studies), “Marxism’s Marx” (Edward Baring, Associate Professor of History and Human Values), and “Music Theory in the Plural” (Anna Yu Wang, Assistant Professor of Music; Jürgen Hackl, Assistant Professor of Civil and Environmental Engineering).

Grantees will work with a team of CDH Research Software Engineers and Project Managers to develop methods and software to aid their research. “We are excited by the focus on multilingual research questions, as well as the engagement with AI approaches,” says Jeri Wieringa (Assistant Director, CDH). “These six-month partnerships enable us to contribute to a wide variety of faculty projects while also developing more generalized methods in the areas of text reuse and large language models (LLMs).” This year’s application process prioritized projects exploring opportunities and/or limits of AI for humanities research.

Princeton Open HTR Initiative: Creating Infrastructure for Modeling Historical Texts

In their proposal, Marina Rustow (Khedouri A. Zilkha Professor of Jewish Civilization in the Near East, Professor of Near Eastern Studies and History, Director of the Geniza Lab), Helmut Reimitz (Shelby Cullom Davis ’30 Professor of European History, Professor of History), and Christine Roughan (Postdoctoral Research Associate, The Center for Digital Humanities and Manuscript, Rare Book and Archive Studies) wrote:

The Princeton Open HTR Initiative is a research infrastructure project to pilot a Princeton-specific instance of the eScriptorium handwritten text recognition (HTR) software. While the digitization efforts of recent decades have revolutionized access to historical texts in libraries, archives, and cultural heritage institutions, HTR is making these digitized materials machine-readable, opening them up to text search and computational analysis at scale. An increasing number of humanities scholars – including at Princeton – are eager to integrate HTR into their research workflows. However, barriers to entry can be significant, particularly in technical expertise and cost.

Starting this summer, the CDH team will focus on the technical design of a pilot instance of the open-source eScriptorium software configured to work with the Princeton high-performance computing environment. The goal is to undertake the initial research and development to determine if a central resource for handwritten text recognition is possible.

Marxism’s Marx

In his proposal, Edward Baring (Associate Professor of History and Human Values) wrote:

The global success of Marxism is one of the most important developments in modern intellectual history. By the mid-twentieth century, Marxist ideas had come to inform thinkers and activists on every inhabited continent, with enormous consequences for local and global politics. However, the international diffusion of Marx’s texts and the appeal of his ideas around the world were by no means preordained, for Marx had focused his analytical attention on the economic histories and conditions of Western Europe. This project aims to build a resource that will help scholars study this development by allowing them to understand how Marxists globally drew from Marx’s corpus of writings.

Beginning in the Fall of 2024, the CDH team will focus on developing methods for identifying quotations from Marx within a small subset of Marxist literature. The goal is to lay the methodological groundwork for the larger project of identifying the uses of Marx within a multilingual corpus of Marxist literature.

Music Theory in the Plural

In their proposal, Anna Yu Wang (Assistant Professor of Music) and Jürgen Hackl (Assistant Professor of Civil and Environmental Engineering) wrote:

Despite the enormous diversity of musical phenomena that exist across historical and cultural spaces, the majority of music-theoretical and scientific approaches in music studies have focused on interpretations of Western canonical source materials, neglecting a vast dataset of source documents from underrepresented languages and communities (Agawu 2003; Stover, Tilley, and Yu Wang 2020). To date, most global discourse on music theory remains untranslated, which limits the possibility of building equitable relationships among global music communities and privileges intellectual traditions occurring in European languages, particularly English.

In Spring 2025, the CDH team will focus on assessing the use of multilingual LLMs for tracking concepts between texts, based on a controlled vocabulary developed by the project PIs. The goal is to assess and develop the capacity to link concepts in music-theoretical texts across languages as part of a larger project to expand the “canon” of music theory.

We look forward to sharing more about each project as they begin in the coming months!

For more information about Collaborative Research Grants, past projects and grantees, and the application process, please visit https://cdh.princeton.edu/engage/grants/cdh-research-grants/.
