
DH2026 Pre-Conference: Call for Submissions to the "Pedagogy Poster Slam" Hosted by the DH Pedagogy and Training SIG

The DH Pedagogy and Training SIG, a special interest group under ADHO, will hold a "Poster Slam" during the DH2026 pre-conference in Daejeon.

The event will take place during a workshop session on one of two days, Monday 27 July or Tuesday 28 July. The format is essentially a poster session for sharing a teaching assignment or a full syllabus, but it opens with a "slam" in which each participant gives a 120-second condensed introduction to their teaching material. After the slam, presenters and attendees move to open Q&A as in a conventional poster session, and presenters are encouraged to bring copies of their syllabus or assignment to distribute.

Key details:

Scope of presentations: a specific assignment from a DH-related course, or an entire course

Submission deadline: 5 January 2026 (notification of results: 5 February 2026)

Submission materials: context of the course/assignment (150 words), detailed description (200 words), and an upload of the syllabus or assignment file

Note: This event is a SIG workshop held separately from the poster session of the DH2026 main conference. Although the submission formats differ significantly, posters may be submitted to both the main conference and this workshop.

The DH Pedagogy and Training SIG will hold a Pedagogy Poster Slam at the 2026 DH Conference in Daejeon, South Korea. The Pedagogy Poster Slam will take place during the SIG’s reserved slot during one of the conference’s two workshop days, either Monday, 27 July 2026, or Tuesday, 28 July 2026.

Posters will either focus on a) a specific assignment for a DH or DH-inflected course or b) an entire course. During the workshop, participants will first take part in the Slam, where they will have a maximum of 120 seconds to discuss one aspect of their assignment or course.

Once the participants have all spoken, the rest of the workshop will be devoted to a conventional poster session, where attendees can engage with the presenters at their posters. Presenters will be encouraged to bring print copies of their assignment/syllabus to distribute to attendees.

All conference attendees will be welcome to attend the workshop and slam. (Plus, there will be food!)

Call for Proposals
The SIG Conveners invite all members of the DH community to submit proposals for the Pedagogy Poster Slam. Proposals will include

  • 150 words or fewer about the context of the assignment, course, or workshop
  • 200 words or fewer about the assignment, course, or workshop
  • An upload of either the assignment or the course syllabus

Proposals should be submitted via the following Google Form: https://forms.gle/zdkNEHXPDppXQz3E6.
Proposals are due by 5 January 2026. Accepted participants will be notified not later than 5 February 2026.
Proposals will be peer reviewed by the SIG Conveners.

N.B. Please be aware that posters submitted to the pre-conference SIG Pedagogy Poster Slam are distinct from the DH2026 poster submission category. The Program Committee for DH2026 has indicated that you are welcome to submit posters to both the main conference as well as to this SIG workshop, although the submission format is significantly different.

Depositing Posters
Following the Conference, the SIG Conveners will create and curate a collection of the posters and associated pedagogical documents.

Questions
Please contact the conveners with any questions you may have.
Brian Croxall (brian.croxall@byu.edu), Diane Jakacki (dkj004@bucknell.edu) and Walter Scholger (walter.scholger@uni-graz.at)



DH 2026 Deadline Extension Announcement

The proposal submission deadline for DH 2026 has been extended by one week.

New submission deadline: Monday, 15 December 2025

The DH2026 organizers announce that the submission deadline for Digital Humanities 2026 proposals has been extended to December 15, 2025 (KST).

Next year’s conference (July 27–31, 2026) will be hosted by the Korean Association for Digital Humanities (KADH) at the Daejeon Convention Center in Daejeon, South Korea. The theme for this conference is “Engagement.” Submissions are welcome in multiple formats, including long and short papers, posters, panels, workshops, and mini-conferences.

Please visit the Call for Proposals on the conference website for more details: https://dh2026.adho.org/cfp.

We invite you to share your work with the global Digital Humanities community.



American Historical Association (AHA): "Guiding Principles for Artificial Intelligence in History Education"

As of 5 August 2025 (local time), the American Historical Association (AHA) has released its "Guiding Principles for Artificial Intelligence in History Education."

The very fact that the largest historical association in the United States has issued such guidelines is itself significant (for comparison, has any scholarly association in Korea released guidelines of this kind?). A few points, not surprising but worth noting:

  1. Historical research and education remain important in the age of AI, but they must change and adapt
  2. Emphasis on the risks of generative AI, especially the warning that it conveys unfounded "certainty where uncertainty exists"
  3. The importance of AI literacy: "banning AI" is not a long-term solution, so education in using and understanding AI is needed
  4. Clear principles are needed, but the details are still being worked out through experimentation
  5. (Last but not least) History education, research, and thinking are still valid!

The full text is reproduced below; it is also available via the original link.

Guiding Principles for Artificial Intelligence in History Education

Approved by AHA Council, July 29, 2025

In 2023, the American Historical Association Council charged an Ad Hoc Committee on Artificial Intelligence in History Education with exploring the implications of generative artificial intelligence (AI) for history teaching and learning (see the glossary in Appendix 1). A separate committee is focused on research and publication, which falls outside our purview.

This committee recognizes that generative AI tools offer significant opportunities to improve teaching and student learning. At the same time, we respect the concerns expressed by history educators, many of whom feel overwhelmed, distracted, or frustrated by these technologies.

While generative AI is undeniably powerful, it cannot replace human teachers. The most extreme proposals to automate education betray a fundamental misunderstanding of teaching and learning, the core competencies we aim to cultivate in students, and the deeply human-centered work of education. Indeed, the rapid adoption of AI tools suggests that it has never been more important to appreciate the complexity of our shared past and what it means to be human.

History educators have been seeking guidance on how to responsibly and effectively incorporate generative AI into their teaching practice. Some have voiced concerns about the challenges of maintaining academic integrity; others have raised important ethical, environmental, and economic objections to these technologies and their application. We minimize none of these concerns. Given the speed at which technologies are changing, and the many local considerations to be taken into account, the AHA will not attempt to provide comprehensive or concrete directives for all instances of AI use in the classroom. Instead, we offer a set of guiding principles that have emerged from ongoing conversations within the committee, and input from AHA members via a survey and conference sessions.

Contents
I. Historical Thinking Matters
II. Generative AI and Its Limitations
III. AI Literacy
IV. Concrete and Transparent Policies
V. The Value of Historical Expertise
Appendix 1: Glossary
Appendix 2: Example of an AI Policy Table for Use in History Education
Additional Resources

I. Historical Thinking Matters
Historical thinking remains essential in an age of AI.

The rapid emergence and continuing evolution of generative AI is transforming our relationship with technology. We approach this moment with confidence in the value of human-authored interpretations of history and humility about our predictive capacities. Peter N. Stearns’s classic essays—“Why Study History?” and “Why Study History? Revisited”—are made no less relevant by the advent of large language models (LLMs) and generative AI.

Many disciplines and professions are changing; the historical discipline will too.

While we cannot predict the future, generative AI is already reshaping many disciplines and professions. As has been the case with other technologies, historians will find distinctive ways to work with generative AI. The need for history and history education will not disappear.

Generative AI can mimic some of the work done by historians and history educators. This should not be mistaken for teaching or for learning. Far from rendering the discipline obsolete, generative AI may increase the demand for historians’ specific skills as societies and workplaces navigate an increasingly complex information landscape. The ability to act as subject matter experts, undertake extensive research, synthesize complex secondary literature, and look for biases, inaccuracies, and limitations are invaluable in an age of generative AI. Critical, too, is our disciplinary commitment to accuracy, complexity, and nuance, which remains at the center of historical training.

II. Generative AI and Its Limitations
AI produces texts, images, audio, and video, not truths.

Generative AI is a remarkable technological achievement, but it has undeniable limitations. An awareness of these limitations is important for instructors and students alike. LLMs produce text using an algorithm to select each word from existing books, articles, images, and other media, including AI-created sources. AI texts do not reflect truth; rather, they echo and synthesize, sometimes poorly, sources on which the model has been trained. Generative AI reproduces the limitations of its own training material. By contrast, historians learn to identify and dissect author biases, experiences, social environment, and hidden motivations. Students need to learn to interpret AI-generated content with a critical lens, using their historical training to assess material rather than passively accept it as true or complete.

For all its capacities, generative AI regularly hallucinates content, references, sources, and quotations.

AI models are trained to identify and reproduce patterns, not to comprehend the world in all its complexity and contradictions. If a pattern leads to a false, biased, or imagined output, AI has no way to self-correct. Commercially available generative AI algorithms prioritize speed over accuracy. Given a large task, an AI tool will eagerly invent fictional answers that complete its prompt more quickly, a process often referred to as hallucination. It is essential for students to understand that generative AI can hallucinate data and that historians work to counter these hallucinations when they appear. AI introduces new possibilities for fabricated sources; students must be trained to critically assess all outputs and to recognize that any information provided by a generative AI tool could be false unless properly verified. Evaluating the reliability of sources and assessing the validity of claims are core components of historical thinking and remain especially relevant today.

AI introduces a false sense of certainty where uncertainty exists.

Historians understand that there are things we know about the past and much that eludes us. Generative AI tools risk promoting an illusion that the past is fully knowable. Multimodal models, capable of processing input in one medium to generate content in another, can fabricate strikingly clear visual representations of historical moments that never existed, while chatbots simulate conversations with historical figures as if they were speaking with us directly. These outputs do not represent authentic reconstructions of the past—they are fabrications based on statistical patterns in existing, often flawed datasets. A good history class teaches students to work within the gaps and silences of the historical record, stressing that uncertainty is not a failure but a fundamental feature of historical inquiry. Helping students recognize this fact is essential in an age of AI-generated content.

III. AI Literacy
Banning generative AI is not a long-term solution; cultivating AI literacy is.

Students of all kinds already rely on generative AI tools and will continue to do so. Some committed educators have chosen to reject generative AI for its ethical, environmental, and economic consequences, but ignoring this technology will neither halt its spread nor shield our discipline and students from its reach. We have a responsibility to help students understand these issues in historical context and make informed decisions about their future application. Even if history instructors emphasize in-class writing assignments and exams, the influence of generative AI will be felt both in and outside the classroom. Students will want to use generative AI’s formidable tools and will need to understand its limitations. This committee believes that blanket bans are neither practical nor enforceable. Even those who choose to advance student learning in an AI-free environment will have to engage with these technologies. We must determine how to do so responsibly and effectively. Our task is to help students build the critical skills to navigate these tools. Students are already seeking guidance. One of the most meaningful contributions we can make is to support the development of intentional and conscientious AI literacy.

Generative AI can be a valuable partner in the classroom.

Generative AI can be a valuable collaborator for users who know what to ask and how to correct errors. It can enhance teaching and provide a resource for classrooms. It can speed up preparation and suggest alternative or enhanced learning assessments. Generative AI allows seemingly limitless possibilities for assignments that cultivate crucial literacies. For example, a student could be asked to compare an AI-generated summary of an academic article with the original text, assessing what the AI engine gets right, what it gets wrong, and whether the article’s most important contributions have been recognized. Such tasks help students cultivate analytical skills while fostering a more nuanced understanding of the strengths and weaknesses of generative AI. It can also prompt students to engage with the original article more deeply, building skills of historical thinking while fostering AI literacy.

Creativity is even more essential in an age of generative AI.

Some forms of assessment, even those hallowed by time, may disappear. Assignments such as short summaries can be easily duplicated by generative AI. On the other hand, creative assignments such as the unessay or in-class role-playing exercises along the lines of the Reacting to the Past series will likely become even more valuable.

Training future history educators requires clear and transparent engagement with generative AI.

The teachers of tomorrow, whether K–12 instructors or higher education faculty, are students today. Generative AI will likely be one of the most significant professional issues they encounter. Current history educators have a responsibility to model appropriate engagement with generative AI and to equip future teachers with the ethical frameworks and practical skills needed for their careers. It is essential to prepare future teachers not by abandoning traditional historical training but by combining it with new AI literacies. The core skills of the historical discipline—extensive research, careful source evaluation, critical reading—remain foundational. If anything, the rise of generative AI makes these skills even more essential. At the same time, future educators must be AI literate, which means learning how generative AI systems are trained, how to recognize bias and hallucinations in generative AI outputs, how to use AI tools to support (rather than replace) critical work, and how to teach our students to do the same. Training the next generation of history educators requires that we hold fast to the disciplinary core of history while expanding the professional toolkit available to future teachers.

IV. Concrete and Transparent Policies
History educators must develop concrete and transparent policies for AI usage and communicate these to students.

Students are navigating a vast and rapidly changing technological landscape with few settled rules. It is essential for history educators to provide clear, consistent guidance for students at all levels and to talk openly about these technologies and their limitations. All syllabi should include explicit generative AI policies that specify when these tools may be used and when they are prohibited (for examples, see the additional resources below). Syllabi should also affirm core scholarly principles—most importantly, the obligation to cite all sources, including AI-generated material. The rise of generative AI does not alter the expectations that underpin historical scholarship. The specific citation format is less important than the act of acknowledging the use of generative AI. Students should be taught how to do so routinely and accurately. Setting expectations openly in a supportive atmosphere—perhaps including an AI-use section in each assignment—encourages students to develop responsible habits without fear of penalty for honest disclosure. Vague or inconsistent expectations risk serious consequences, including unintentional academic misconduct or professional harm. As a possible starting point, we include a model table in Appendix 2.

Experiment, reflect, revise.

No single generative AI policy will be perfect. A landscape in which technologies are evolving rapidly calls for a flexible, experimental, and iterative approach: try new tools and policies, observe their effectiveness, gather student feedback, and revise as necessary. What works this semester may require significant adjustment next year. Teaching AI literacies will require engaging with these technologies and modeling ethical AI engagement. We cannot ask our students to cite content produced or adjusted by generative AI if we do not adhere to this rule ourselves.

V. The Value of Historical Expertise
Generative AI cannot replace historical methodology.

Historical inquiry involves gathering information, making connections, and interpreting evidence in ways that reflect both an individual mind and established disciplinary standards. As a deeply human endeavor, writing history is both science and art. Great works of history are transformative because they are neither predictable nor obvious; therefore, they cannot be replaced by a technology that simply reproduces existing patterns. Generative AI systems are powerful pattern-recognition tools that are also fundamentally limited. They do not think historically; they predict based on past data rather than questioning or reinterpreting it. AI cannot surprise us with new historical arguments, creative reframings, unpublished materials, or original narratives that challenge established understandings. The vast wealth of human history contained in gated archives and nondigitized material is inaccessible to AI engines. At the same time, these tools cannot substitute for rigorous historical methods: finding new sources, posing generative questions, weighing evidence, assessing context, grappling with uncertainty, and constructing original arguments.

There are no shortcuts to expertise.

Evaluating AI-generated content requires expertise that can be built only through sustained engagement with the subject matter. LLMs present a crucial paradox: they can produce material that appears polished and credible, but assessing their outputs demands critical skills that the models themselves can neither teach nor foster. If students rely on generative AI without developing their own skills, they risk entering an unproductive loop: minimal engagement leads to an inability to properly assess outputs, which leads to an uncritical acceptance of flawed material. Our goal is to foster a different trajectory, whereby generative AI is seen as a tool that supports the pursuit of knowledge, not a shortcut that replaces meaningful work. Through active engagement and skill-building, students can use AI thoughtfully by integrating outputs that genuinely improve their work and rejecting those that do not. For example, a student who drafts an essay and uses an LLM to refine its language or sharpen phrasing will need strong writing and analytical skills to evaluate whether AI has served its purpose. In short, expertise must precede AI reliance.

History education must continue to cultivate habits of mind that current and future students will rely on to thrive in a world shaped by generative AI.

Generative AI can produce college-level essays, complete complex research tasks, mimic historical figures, and create realistic-looking historical images and media. New capabilities are emerging every day. We cannot predict the long-term trajectory of these technologies, but we can recognize that we are in the middle of a period of profound change and take steps to meet the moment and prepare our students for the future.

Historical thinking will continue to matter. The study of history prepares students to account for change over time, to recognize the complexity of human existence, and to wrestle with the contingencies that define life in an uncertain world. Everything has a history, and to think historically is to contemplate what it means to be human.


AI on American University Campuses: Two NYT Articles

The New York Times recently published two interesting articles, within a week of each other, on the use of AI on American university campuses.

(Both articles are linked via the NYTimes "gift" feature, so they can be read without a paid account.)

The first reports that American professors are using ChatGPT, and that some students are unhappy about it:

“The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It”

The second reports that students are recording their entire writing process to avoid being wrongly suspected of using AI:

“A New Headache for Honest Students: Proving They Didn’t Use A.I.”

As for Korea… there is plenty of talk about AI on university campuses, but I have not yet heard of similar cases here. Has anyone encountered something like this?


Rethinking ‘Sinicization’ through Data: A Network Analysis Methodology for Yuan Dynasty Studies

This is an online academic talk hosted by Digitale Geschichtswissenschaften an der Humboldt-Universität zu Berlin (Digital History at Humboldt-Universität zu Berlin, Germany).

Title: "Rethinking 'Sinicization' through Data: A Network Analysis Methodology for Yuan Dynasty Studies"
Speaker: Wonhee Cho (Department of History, Yonsei University)

Abstract: This presentation revisits the concept of "Sinicization" during the Yuan dynasty by employing network analysis to evaluate the centrality and cooperative dynamics of officials in the Yuan court. Using data derived from the Official History of the Yuan (Yuanshi), this research interrogates the validity of traditional narratives of cultural assimilation, particularly the extent to which the Mongol administration in China adopted Han Chinese governance practices. By focusing on the relationships among central officials, this study sheds light on the evolving power dynamics between Han and non-Han elites under Mongol rule.

The methodology centers on what I have named “cooperation network analysis,” which defines connections (or edges) as instances where two or more officials were assigned to collaborate on a task, shared appointments within the same government office, or were ordered to work on common policy initiatives. Using these relationships, a network graph was generated, with nodes representing individual officials and edge weights reflecting the intensity of their interactions. Centrality metrics, particularly eigenvector centrality, were applied to identify key actors within the network and assess their influence in the administrative hierarchy.
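As an illustration of how such a cooperation network might be assembled and scored, the sketch below uses Python with the networkx library; the talk does not specify a tool, and the record structure and official names here are hypothetical placeholders rather than data from the Yuanshi. It builds weighted edges from co-assignment records and applies weighted eigenvector centrality, following the approach described above.

  # A minimal sketch (assumed tooling: Python + networkx) of a cooperation
  # network analysis. Officials and records below are invented placeholders.
  import itertools
  import networkx as nx

  # Each record lists officials assigned to the same task, office, or
  # policy initiative (hypothetical example data).
  cooperation_records = [
      ["Official A", "Official B", "Official C"],  # joint task assignment
      ["Official A", "Official B"],                # shared office appointment
      ["Official B", "Official D"],                # common policy initiative
  ]

  G = nx.Graph()
  for record in cooperation_records:
      # Every pair of co-assigned officials gets an edge; repeated
      # co-assignments raise the edge weight (interaction intensity).
      for u, v in itertools.combinations(record, 2):
          if G.has_edge(u, v):
              G[u][v]["weight"] += 1
          else:
              G.add_edge(u, v, weight=1)

  # Weighted eigenvector centrality highlights the most influential
  # actors in the administrative network.
  centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
  for official, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
      print(f"{official}: {score:.3f}")

Other centrality measures could be substituted in the same pipeline; eigenvector centrality is shown here only because the abstract names it explicitly.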

This approach revealed patterns of collaboration that traditional qualitative methods often overlook, enabling a nuanced understanding of the administrative structure. For instance, rather than relying on preexisting assumptions about Han dominance, the analysis highlights the fluidity of administrative roles and the Mongol court’s pragmatic reliance on both Han and non-Han officials. By visualizing these interactions, the research identifies overlooked figures whose importance is substantiated through their extensive connections and influence, challenging the Sino-centric bias of the historical record.

The study also underscores the methodological challenges inherent in such an analysis, including the limitations of data consistency in the Yuanshi and the absence of biographical details for many non-Han officials. Despite these obstacles, the findings illuminate the Mongol court’s strategic adaptations in response to political and social pressures, rather than a straightforward adoption of Confucian bureaucratic models.

By integrating digital humanities tools with historical inquiry, this research not only refines our understanding of the Yuan administration but also reframes the broader historiographical debate on “Sinicization.” It demonstrates the potential of network analysis as a methodological bridge between qualitative source analysis and quantitative data-driven approaches, offering a replicable framework for investigating the interplay of ethnicity and governance in imperial contexts.

Talk link and further details: see the official website link.
