This post, in a lot of ways, is the result of putting my previous blog post into action; to get a better gist of what the workshop itself entailed, please see that post.
In running a workshop on GIS mapping using 3D-printed maps and pins, one conclusion, point of error, question to consider (call it what you will) came to mind: when running a Digital Humanities workshop aimed at teaching a specific digital tool without the digital, while also using Latinx materials, what is it that gets missed? Is it possible to do due diligence on both fields in a limited time with an audience of mixed knowledge? Does the digital tool come before the context of the world in which it is used? And why do these questions trouble me so much? Am I alone in my concerns?
By all of these questions, what I mean to ask, reworded, is this: is there, and does there have to be, a difference between teaching the workshop I ran in a future Latinx class with a DH section on the syllabus versus running a workshop with a general DH group on a Latinx topic while focusing not on the Latinx portion but instead on the mechanics of a tool? Theoretically, in a Latinx topic class, the history and specifics of what is being plotted, which in this case were different Latin American migrant experiences in the country of Mexico, would already be explained in lectures and readings. There are no underlying assumptions of knowledge left open in the conversation fostered in a classroom with students. A general workshop functions a little differently, especially given time constraints and the fact that it’s a one-day event where, at the end of the day, a single skill or specific point is valued.
Admittedly, in the last post, I left out the details of the sixteen flashcards, in part because making them took a bit longer than I expected. Just like any other teaching material, the details and specifics mattered an unbelievable amount. So, I did what any literature major would do: I drew on my training and tried to ensure that the stories/narratives were as centered as possible. In practice, this means that nearly all of the sixteen flashcards are snippets from documentaries, novels, memoirs, and government documents. This project, at its heart, was one of critical making, using 3D printing to embody the work of literary studies.
Once again, in a Latinx studies classroom, I would never run the workshop before spending time on the histories of Latin American migrations and the differences across decades. For example, in the early 2000s, into the turn of the second decade of the 21st century, there was an increase in unaccompanied children from across Central American countries fleeing gang violence, a direct result of the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (IIRIRA); this is a different moment, with its own specifics, within a broader history (Arias 2012). What is being written about and documented reflects this period. Therefore, the flashcards also aim to reflect not only the different migrant experiences in Mexico but also who is migrating. In my previous blog post, I used Ana Patricia Rodriguez’s framework of accompaniment to argue that, by plotting the flashcards onto the map, a participant is also “positioned, if not prodded, to question the conditions.” The hope in a classroom is that students can see the differences across decades as they plot the snippets and be “prodded” to make connections. It turns out that, in a workshop, this is slightly harder to pull off successfully.
Answers to my Questions
I didn’t realize just how much I would appreciate the act of running a workshop, the practice it would provide, and all the questions it would raise for me as a teacher. I say this not as a value judgment but as a point of improvement and awareness: in many ways, the workshop failed. I failed. And I love the fact that I failed. And next time I might fail again, and I might not, but I look forward to it nonetheless.
Because in failure, and in writing this blog, the answers to my own questions, at least for myself, become slowly clearer and clearer. So, I will now answer the questions using the experience of the workshop itself. What gets missed? I learned that if we center a digital tool too much, and not the world it is situated in, the Latinx histories fall a little to the side. Once again, my failure. Is it possible to do due diligence on both fields? That one is a little harder for me, because the reality is similar to the environmental humanities, where Priscilla Solis Ybarra, David Vázquez, and many others have demonstrated gaps in Latinx representation in other fields, but also a mirrored gap in Latinx studies. These gaps complicate one’s ability to do due diligence in a limited time. I repeat, my failure. Does the tool come before the world? On the day of this workshop, it did, in part because the intent of the workshop was to teach the tool, and I focused on that. Why do these questions bother me so much? A question for a later day. Would I teach it differently in my future Latinx studies class? Absolutely.
As part of my tradition, a place of thanks. I am so very grateful to have had the opportunity to run this workshop and to find joy not only in critical making but also in failing. I want to not only thank my praxis group and those who were there for the workshop, but also my amazing advisors who encourage all my side-quests, even when they include ducks.
References
Arias, Arturo. “Central American-Americans in the Second Decade of the Twenty-First Century: Old Scars, New Traumas, Disempowering Travails.” Diálogo 15, no. 1 (2012): 4–16.
Vázquez, David J. Decolonial Environmentalisms: Climate Justice and Speculative Futures in Latinx Cultural Production. Austin: University of Texas Press, 2025.
Ybarra, Priscilla Solis. Writing the Goodlife: Mexican American Literature and the Environment. Tucson: The University of Arizona Press, 2016.
Drew Macqueen learned about processing & geotagging photos, collaborating with a local artist & Library colleague to map & visualize photos of Virginia taken through the windows while riding trains throughout the state.
Teal is a centralized AI platform for building your resume and applying to positions you’ve dreamed of obtaining. It offers a user-friendly design that makes it easy to tailor your abilities and qualifications so they stand out to the biggest companies. You can create or import your current resume, edit it with AI recommendations, and use Teal’s in-browser job search to find and apply to constantly updated positions.
Here are our ratings for a general gist of this tool!
Functionality: 4/5
Accessibility: 3.5/5
Cost: 4/5
Privacy and Security: 4/5
Overall Score: 4/5
Read on to learn more about this tool and how we justified these ratings!
With entry and advanced career positions becoming more and more competitive, Teal can offer current students and graduates an increased chance of securing their next opportunity.
Features
Teal offers a lot more than just AI-powered resume building:
AI Resume Builder: Teal provides users with options to create or enhance their resumes with minimal effort, using AI suggestions to make experiences stand out for specific jobs.
ATS Checker: Many companies use Applicant Tracking Systems (ATS) powered by AI to filter candidates. Teal analyzes your resume, provides a score, and offers recommendations to improve competitiveness in your field.
Job Search: The platform also hosts an extensive database of positions for users to apply to with options to query, sort, and filter for perfect job opportunities.
AI Job Search: Teal also offers a way to let AI conveniently gather job recommendations based on your prompt. This is a great way to get a general idea of what you’re going to be looking for in a position.
Job Tracker: Beyond searching, Teal lets users track their applications. The tracker offers convenience with the location, salary, deadline, and additional information about positions.
Interview Practice: Teal sets itself apart by covering this crucial aspect of the application process. It offers different scenarios and questions a recruiter may ask, and uses AI to realistically proctor the session and give immediate feedback to the user.
Chrome Extension: The extension makes it easy to stay on top of applications by quickly tracking goals and finding interesting opportunities while browsing.
Pricing and Access:
Although Teal has many useful tools for the job search, it is undoubtedly limited behind a steep paywall of $13 per week, $29 per month, or $79 every 3 months. The Teal+ subscription offers unlimited usage of AI throughout the application. The free trial gives 10+ credits for AI to refactor and build new resumes and cover letters from scratch. Luckily, only the aspects of Teal that use AI are affected by this price. Features such as interview practice, job search, etc. are completely free for users to enjoy.
Creating Your First Resume
You’ve never written a resume before and aren’t sure what’s relevant. Teal guides you step by step, helping you enter your background and then transforming it into a professional resume with AI-powered enhancements.
Rehearsing for Popular Interview Questions
You get asked, “Tell me about a time you failed,” and your mind goes blank. Teal’s Interview Practice has already prepared you for this exact curveball.
Enhancing Resume Syntax
Your resume is filled with the phrase “responsible for”. Teal’s AI suggestions can surface more impactful action verbs that actually make you sound like you did something.
Tinkering with Job Qualifications
You find the description for your dream job at your dream company. You think your qualifications are a great fit, but your resume doesn’t reflect it. Teal’s ATS Checker highlights the right keywords from the job description and shows you how to optimize your resume so it makes it through the company’s filters.
Remembering your Applications
You mass-applied to roles yesterday, but can’t remember if it was Spotify or Shopify. Teal’s Job Tracker and Chrome Extension keep your applications organized so you never mix them up.
Watch out for…
Resumes are supposed to represent you, so consider these:
Over-optimizing your resume: AI can exaggerate or obscure details. Always review each suggestion to ensure accuracy and honesty.
Buzzword stuffing for ATS: Adding too many buzzwords may get past automated filters but can make your resume read unnaturally to recruiters. Balance is key.
Generic phrasing from AI: Some AI suggestions may be vague or overused. Always make sure to rework phrases so they genuinely represent you.
Privacy of uploaded data: Your resume and job applications contain sensitive information. Make sure you’re comfortable with Teal’s storage and privacy policies before uploading.
Our Verdict:
Functionality: 4/5
Teal offers a robust way to enhance your resume and documents, helping you stand out to recruiters. It goes beyond resume rewrites; its mock interview practice sets it apart from similar tools on the internet. While its AI suggestions are generally helpful, some content may not perfectly align with users’ experience, and template variety is somewhat limited.
Accessibility: 3.5/5
Teal is web-based and can be accessed from any device, complemented by the Chrome extension for easy job saving from multiple boards. However, the platform can feel unintuitive at first. Given the breadth of features, users may need some time to fully understand how to use them effectively.
Cost: 4/5
Teal’s free version allows unlimited resume creation, downloads, and basic AI support. For more serious job seekers, the Teal+ subscription unlocks advanced features and additional templates. While valuable, some users may find the premium pricing a bit steep.
Privacy and Security: 4/5
Teal implements solid security measures, and their privacy policy clearly outlines data handling practices. That said, as with any platform that requires personal information, some inherent risk remains.
Overall Score: 4/5
Teal is a solid, AI-powered resume builder with useful features for job seekers. Its free tier is sufficient for most users, while premium features offer extra benefits. Minor accessibility issues and AI limitations prevent a higher overall score.
Whether it’s your first time creating a resume or you’re searching for better ways to optimize one, Teal is an amazing tool. Try it here!
Over the weekend, one of the amazing student Technologists, Link, did a cleaning and reorganizing of the resin 3D printer station. The printer gives off some nasty fumes, so she procured an air purifier setup made just for such printers. Unfortunately, the model available doesn’t directly connect with our Prusa SL1S. Link put the air filter in place, but had to resort to duct tape to get it to connect to the resin printer. It didn’t work.
So when I came in this morning and saw the need for an adapter to the adapter, I knew what I was going to do today!
I spent some time thinking about the best options. An insert with magnets? But how does the original adapter stay put on the new adapter?
Well, there are screw holes – how about using them? Yep, that’s the ticket. Basically, replicate the bottom of the original adapter so it can screw to the new one, then add a whole bunch of magnets!
And it worked on the first try! I had to double up the magnets in order to make it strong enough to stay on, and the gasket printed in TPU could be a little bit thicker. But it was a great success!
The models are available on Printables.com for download and 3D printing.
Death by Numbers Receives Renaissance Society of America’s Digital Innovation Award
Death by Numbers has been awarded the Renaissance Society of America’s Digital Innovation Award for 2026. This award “recognizes excellence in digital projects that support the study of the Renaissance.” Jessica Otis, associate director of RRCHNM, accepted the award this past month at the […]
And immediately I was confronted with an issue with my calculations.
The Problem
At the end of all my learning and calculating, I decided:
36mm (servo) → 12mm|36mm (combo) → 12mm (pinion)
has smaller gears and gives good enough range.
One thing I forgot to consider is the length of the servo horn that is used to connect the servo to the gear. I could do without it, but designing and printing such a small toothed hole has issues. I have seen others try, and filament 3D printing does not provide fine enough detail to mesh well with the servo gear. So using the supplied horn attachment makes things much easier.
The problem is that the horn is about 22mm in length. If my gear is only 36mm in diameter (just an 18mm radius), then the horn would stick out into the gear’s teeth!
Another sidetrack bump I had to overcome was getting the dimensions of the servo horn. The dimensions I could find online were unsatisfactory. So I measured one myself!
Then I went ahead and 3D modeled it, and put the 3D model and diagram files up on printables.com for anyone to use.
With all of that info, I can then recalculate the gear train dimensions so it fits with the servo horn.
The Correct Gear Train
I played around with different settings, but it seemed the best option (that being the smallest size for the servo and large combo gears) called for a 46mm servo gear → 20mm|46mm combo gear → 20mm pinion.
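For anyone who wants to sanity-check combinations like this, the travel math is simple enough to script. Below is a minimal sketch (mine, not from the original calculations) that computes rack travel from the diameters above; the real build’s gear module and servo sweep may differ, so the printed numbers are illustrative rather than measurements.

```python
# Gear-train travel calculator: a rough sketch using the diameters from the
# post (46mm servo gear -> 20mm|46mm combo -> 20mm pinion). Treat the output
# as illustrative; the actual module and servo sweep may differ.
import math

def rack_travel_mm(servo_sweep_deg, driver_diams, driven_diams, pinion_diam):
    """Rack travel for a train of driver/driven gear pairs ending in a pinion."""
    ratio = 1.0
    for driver, driven in zip(driver_diams, driven_diams):
        ratio *= driver / driven  # a big gear driving a small one speeds rotation up
    pinion_rotation_rad = math.radians(servo_sweep_deg) * ratio
    return pinion_rotation_rad * (pinion_diam / 2)  # arc length at the pitch radius

# The 46mm servo gear drives the 20mm side of the combo gear; the combo's
# 46mm side then drives the 20mm pinion.
for sweep_deg in (100, 180):
    travel = rack_travel_mm(sweep_deg, (46, 46), (20, 20), 20)
    print(f"{sweep_deg:3d} degree sweep -> ~{travel:.0f} mm of rack travel")
```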
The first day of modeling, I decided to jump off the Fusion 360 train and try onshape.com. It’s a web-based 3D modeling and CAD tool. It has been around since 2015, and it has been getting ad time lately in many of the YouTube.com videos I watch, so I thought I’d give it a try. I was prepared for some learning curves and to spend some time learning a new system, but two things got me to throw in the towel after a full day of working with it: 1) I couldn’t figure out how to do something pretty simple that would take 2 minutes in Fusion 360, and 2) I didn’t care for the interface; it felt unprofessional. If TinkerCad.com is the elementary school version of CAD, Onshape.com looked like the 9th grade version. I did love that it was browser-based. And making double helical gears was a breeze! There’s a handy built-in menu for all kinds of gears. Fusion 360, on the other hand, is a big L in gear making. You have to import 3rd-party scripts, and I can’t get any of the fancy gear scripts to work.
Like many things, it was the fact that I could get things done much faster with the tool I already knew, and I was accustomed to the interface that led me back to Fusion 360.
Making the Gear Train
I had previous attempts at designing the gear train, but I decided to start from scratch since Fusion 360 doesn’t have an easy way to just change the size of gears when using the gear script plugin thing.
Servo Gear
So, first I designed the servo gear. Pretty easy to create a 46 tooth gear with the gear script plugin thing.
I designed a cut out, or inset, for the servo horn to fit inside. This is the easiest way to attach the gear to the servo. 3D printing these gears with filament would not have enough resolution to print the fine teeth needed to interface with the tiny default gear on the servo shaft. Much easier to use the included horn.
Combo Gear
The combo gear was pretty easy, too. Just make another 46 tooth gear, then a 20 tooth gear, and stack them on top of each other.
I set the diameter of the hole through the gear at 4.2mm. That’s big enough for an M4 bolt to go through, with just enough tolerance to allow the gear to spin but not wobble.
Pinion Gear
Another very simple gear to model. There’s nothing special about this, just a 20 tooth gear with a 4.2mm diameter hole.
Rack
The rack is pretty straightforward. I created a 20 tooth gear, then copied one of its teeth down the length of the rack.
Gear Holder
This was a little bit trickier. The gears were all prototyped in one go, and the first print was great. This part has taken 7 tries so far.
I started by creating a new Assembly in Fusion. Then adding in the gears and aligning them as needed. I went with a stacked approach so as to keep the footprint as small as possible. I had previously modeled the servo motor, so I was able to add that in as well.
It was tricky to get the servo aligned with the servo gear, and then get each of the gears aligned with the ones they mesh with. I realized that if a part has its sketch turned on, then that sketch shows up in the Assembly file. I used that to create a construction line on the servo gear and put a point where the center of the combo gear should be aligned. Then I did the same on the combo gear to align the pinion. Then I added the holder, servo motor, and rack.
It was a lot of back and forth between the designs for the parts and the assembly to align everything correctly. But in the end I think it lines up well.
Spacers
After the first version, I realized that the gears needed spacers to keep them in place. The holder is wider than the gears. So modeling and printing a couple of spacers is pretty easy.
Somewhat Working
I connected everything up, bolted in the gears, and plugged it in. And it works… mostly.
As the video shows, the gears work, somewhat. There is a bit of jittering, which may be due to the code just rotating the gears back and forth. A more normal behavior would be moving from one angle to the next and stopping there. The servo is also not moving at a full 180°. More like 100°. This is only about 111mm of travel, not the 150mm we’re hoping for. It might be time to consider better quality servos. Perhaps some that move 270°.
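For what it’s worth, here is a minimal sketch of the “move to an angle and stop there” behavior, assuming a Raspberry Pi and the gpiozero library; the post doesn’t say what controller or code the actual test used, so treat this purely as an illustration of the idea.

```python
# Step-and-hold servo motion: a sketch assuming a Raspberry Pi with gpiozero.
from time import sleep

from gpiozero import AngularServo

# GPIO pin 18 and the 0-180 degree range are assumptions for this sketch.
servo = AngularServo(18, min_angle=0, max_angle=180)

# Rather than sweeping continuously back and forth (which can jitter),
# step to each target angle and hold until the gear train settles.
for target_deg in (0, 90, 180, 90, 0):
    servo.angle = target_deg
    sleep(1.0)  # hold position; adjust the dwell time to taste
```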
It is also a pain to swap the servo motor. Perhaps a redesign is in order.
Members of the ETCL team, including Alan Colin Arce, Graham Jensen, Brittany Amell, and Ray Siemens, recently published an article titled “Multilingualism as Infrastructural Imperative: Language Diversity in Digital Knowledge Commons.” The article is available […]
Are you a humanities student (3rd-year BA, MA, or PhD) using digital methods in your research or thesis? And would you like to present your work to fellow students? We’d love to hear from you!
We – Finn Pietrass and Thomas Rozendaal, student ambassadors at the Centre for Digital Humanities at Utrecht University – are organising a student colloquium: The Digital Humanities Dialogue for Students (date TBA). This event is designed to give students insight into how digital methods can be applied across different humanities disciplines, and to inspire students to explore these approaches themselves.
We are looking for 2 to 3 student speakers from the Faculty of Humanities who are interested in sharing their experiences with digital methods in their studies or a research project. Presentations will be short (approximately 15 minutes) and aimed at a broad student audience. No prior presentation experience is required.
Why participate?
Present your research in a supportive, low-pressure environment
Gain experience as a speaker in an academic context
Discuss and exchange ideas with like-minded individuals
Sign up
Interested in participating or want to learn more? Get in touch via cdh@uu.nl. The deadline to sign up as a speaker is 12 April. We’d be happy to hear from you!
Ronda Grizzle has been learning about recording & editing podcasts using Audacity & the Library’s media system, because she starts recording the “DH is People” podcast episodes this week. (Coming soon to a SLab social media account near you!)
Construction of the brand-new research facilities for the ILS Labs at Drift 10 is now in full swing. Renovations are progressing rapidly and the soundproof booths that will house the labs are currently being installed. The technical installation of the first labs is expected to start in April.
The labs of the Institute for Language Sciences (ILS) are used to study language development in babies and language processing and production in adults. Much of this research involves the use of sound stimuli, which makes soundproof laboratories essential.
Drift 10: View from the outside. Delivery of materials (photo: Desiree Capel)
Ventilation hole drilled in wall. (photo: Desiree Capel)
Several parts of Drift 10 are currently being renovated and upgraded to accommodate the new facilities. The floors of both the ground floor and first floor are being fortified to support the booths and the ventilation system is being expanded. Once completed, the basement, ground floor, and first floor will house two biolabs, two eye-tracking labs, three phonetics/general-purpose labs, an interaction lab, and a head-turn-preference lab.
Frame installation of the baby eye-tracking lab (photo: Desiree Capel)
Cabin placement for the baby eye-tracking lab (photo: Desiree Capel)
Although the move is only a short distance – from Janskerkhof 13 to Drift 10 – it is a major logistical project. To ensure that ongoing research can continue with minimal disruption, the labs will be relocated one at a time. Moving each lab will take several weeks, and the full relocation is expected to be completed by the end of 2026 or beginning of 2027.
Take a look at the construction photos and the installation of the soundproof booths.
Project teams should consist of one faculty member and one graduate student as collaborators on humanities research at the University of Virginia. We welcome proposals that explore experimental humanities research through the use of high-performance computing resources. We encourage projects that challenge traditional understandings of digital humanities (or even of what has been considered humanities research), involve ethical and philosophical issues raised by new technologies, or explore new opportunities for using high-performance computing tools and techniques to better understand the human record.
What does it mean to be a “Bruin” in a world shared with oaks, hawks, and mycelium? Under the direction of Dr. Vetri Nathan (Associate Professor, European Languages and Transcultural...
Ammon Shepherd and a SLab student alum used our makerspace to prototype a cast for glassblowing custom drinking glasses: cutting acrylic for a tank to hold a 3D-printed glass cast & plaster of Paris, figuring out both clay & glue were needed to best seal the acrylic.
Researchers from Utrecht University’s ArtLab are literally getting up close to art history. In the Grote Kerk in Alkmaar, they are using aerial work platforms to take thousands of photographs of the painted church vault.
Using advanced technology, the images will be combined into a detailed 3D model, allowing both researchers and visitors to explore the sixteenth-century artwork up close.
Read more about this ArtLab project with researchers Daantje Meuwissen, Sanne Frequin, and Sjors Nab in the article published by NH Nieuws on 5 March 2026 (in Dutch).
The Department of Digital Humanities in the Faculty of Humanities at Brock University is inviting applications from internationally recognized scholars for the Canada Impact+ Research Chair in Artificial Intelligence and Social Change, a senior academic position based in Ontario.
The role is part of the Canadian government’s Canada Impact+ Research Chairs program, a $1 billion initiative designed to attract leading researchers from around the world to address major societal challenges. The selected candidate will develop a research programme exploring how AI is designed, governed, and experienced in real-world social contexts.
Are you a Digital Humanities student or early career researcher in Belgium who would like to discuss DH with other early career researchers in the Belgian DH community? If so, you might be interested in joining the DH Virtual Discussion Group for ECRs!
The DH Virtual Discussion Group is a joint initiative organized by individuals at multiple Belgian institutions. We strive to involve speakers from all Belgian institutions and encourage participation from all those who are interested in DH and are located at any Belgian institution. For this series, the core organizers are Leah Budke (KU Leuven), Tom Gheldof (KU Leuven, CLARIAH-VL+), Paavo van der Eecken (University of Antwerp), and Loren Verreyen (University of Antwerp). Over the past years, the series has become a regular event. The spring 2026 edition proudly marks our twelfth term.
Our first two sessions this spring will continue the “under-the-hood” format, which entails a volunteer from our community providing a thirty-minute overview of a digital project implementing a given tool, approach, or platform. This is not meant to be a polished research presentation, or to present findings or results, but rather to give our community a behind-the-scenes look at how decisions were made and why specific tools were chosen or developed. The hope is also that this presenter will give attendees some ideas about how to get started implementing a specific tool or workflow, and that they can also answer questions or contribute to a discussion on other projects in our community that might be using similar methodologies or addressing similar issues. This “under-the-hood” session format allows us to have focused discussions around a specific project where we can learn from each other in an informal way. In addition, by implementing this format we can maintain the low threshold for contributing and engaging in the conversations.
Our final session will be a special in-person session during which members of our community can give an elevator pitch of their DH Benelux contribution.
The spring 2026 schedule will be updated as details about upcoming talks are confirmed. Please check back here or on the website (linked above) for full details. Information about each session will also be circulated via the mailing list.
Session 1
Date: Monday 30 March, 15h-16h30 CEST, via Teams
Speaker(s): Julie Van Ongeval, VUB
Title: The Fall of Antwerp (1585) as a linguistic turning point? Language change from macro- and micro-perspectives.
Abstract: The Spanish recapture of Antwerp (1585) during the Eighty Years’ War, known as the Fall of Antwerp, marks a crucial turning point, not only from a historical but also from a linguistic perspective. Historically, the Fall triggered profound social, economic, and demographic transformations. Prior to 1585, Antwerp had flourished as one of Europe’s largest and most prosperous cities, characterized by substantial immigration. In the aftermath of the Fall, however, the city experienced severe socio-economic decline and large-scale emigration, causing its population to decrease by more than half (from 100,000 inhabitants in 1580 to 42,000 in 1589) (De Meester 2011, Lesger 2007). From a linguistic standpoint, the Fall has traditionally been associated with what De Vooys (1970) termed “the decline of the Southern Netherlands”. The event is believed to have shifted the linguistic center of gravity to the Northern Netherlands, slowing down or even halting the ongoing processes of language standardization in the Southern Netherlands and, by extension, in Early Modern Antwerp (Van der Sijs 2020). Yet, these linguistic claims have primarily been based on printed, literary, or explicitly normative texts. Considerably less is known about language use in more informal and everyday contexts (Elspaß 2020).
This study addresses that gap by analyzing informal, handwritten letters preserved in the newly developed Early Modern Antwerp Corpus (1564-1653). Drawing on Dixon’s punctuated equilibrium model (1997), which proposes that significant historical events can accelerate linguistic change, we test an alternative hypothesis: rather than causing stagnation, the Fall of Antwerp may have triggered intensified linguistic variation and change. To assess this hypothesis, we examine six linguistic features that were undergoing change and were relevant to the process of Dutch standardization (clause negation, verbal cluster order variation, schwa apocope, the prefix ge- in past participles, word-final /k/, spelling of /ɣ/ in onset). First, we analyze developments at the community level to identify broader patterns of change. We then adopt a more microscopic perspective, investigating how individual writers respond to the shifting sociohistorical context. This includes both inter-individual variation (e.g. social categories and networks) and intra-individual change across the lifespan. By investigating the linguistic consequences of the Fall of Antwerp from both macro- and micro-level perspectives, this study aims to bridge the three waves of sociolinguistic research, integrating community-level patterns with individual-level variation and change.
Session 2
Date: Monday 20 April, 15h-16h30 CEST, via Teams
Speaker(s): Léa Hermenault, UA
Title: The Belgian Historical Gazetteer: (historical) toponyms in a digital era
Abstract: My presentation will introduce the Belgian Historical Gazetteer, a project funded by CLARIAH-VL+ and hosted at the University of Antwerp. This project aims to set up a historical gazetteer of toponyms for the whole present-day territory of Belgium, in order to provide researchers with a collection of data that does not stop at Belgian provincial borders and which goes beyond the level of municipalities.
First, I will explain how the gazetteer is constructed using both automatic extraction of text from old maps and manual corrections/additions. Then, I will show how this gazetteer will help researchers deal with place names that appear in their sources. Finally, I will demonstrate the potential of digitized lists of historical place names for both toponymic and landscape studies which make digital gazetteers, aside from their classic function, innovative exploring tools.
Session 3 – Special In-Person DH Benelux Session
Date: Monday 18 May, 13h30-16h CEST
Location: room 1.01 Gogotte, Hoek 38, Leuvenseweg 38, Brussels (within walking distance of the central station)
Speaker(s): various members of our community
Format: elevator pitches of DH Benelux contributions
There are an increasing number of conferences, workshops, and funding opportunities in DH, and we would like to ensure that you are aware of them. We will start every session with a moment for individuals to share news about upcoming lectures, workshops, seminars, and conferences. We have a corresponding Slack group where we also share these opportunities both during the discussion group meetings and in between. The link to join the Slack group is included in every email sent out to the mailing list, so watch for it there or send us an email to request access.
If you would like to register or invite other colleagues to join, please complete the registration form for the mailing list here. Please note, if you have received emails from us about the Discussion Group in the past, it means you are already on our mailing list. In that case, there is no need to register again—you will receive the emails with the MS Teams link and any additional information on the day of the session. Additionally, you will also receive updates on upcoming sessions including further details about speakers and the “under-the-hood” presentation topics.
Are you a frequent attendee of the DH Virtual Discussion Group and would like a low-threshold way to become more involved in the organization? We are looking for ambassadors to promote the group within their university networks. If this might be a role you would like to take on, get in touch and we can tell you more!
These events are only open to KU Leuven researchers and staff
To support researchers in their use of relational data, CLARIAH-VL+ & Artes Research (partners in DH@rts) are hosting 2 Nodegoat workshops.
Nodegoat is a web-based research environment designed for the Humanities. The platform enables researchers to manage and visualize complex historical data, including vague dates and historical regions, as well as to generate diachronic geographical and social network visualizations.
During the workshop, participants will learn how to use this flexible digital environment for their own projects.
Program
The workshops will be given by Geert Kessels & Pim van Bree of LAB1100, the developers of Nodegoat.
The morning session (09:30-12:30) will cover a general introduction to Nodegoat.
During the afternoon session (14:00-17:00) the developers will present more advanced Nodegoat features.
You may sign up for just the morning session, just the afternoon session, or both workshops. Just make sure to register for each session individually.
Practicalities
When: April 24, 2026, from 09:30 to 12:30 and from 14:00 to 17:00
Where: Colloquium (05.28) in the University Library. These are in-person workshops and will not be recorded.
For whom: This event is open to KU Leuven researchers working in the Humanities. No prior experience is required. Participants are encouraged to bring their own research questions or datasets to explore within Nodegoat.
Price and registration: Free but mandatory. You can register here. The registration deadline is 10 April 2026.
Every academic year, the HDYDI (How Do You Do (It)?) event on research data workflows signals the start of the Digital Scholarship Module. Through a series of sessions and (mini-)workshops, Artes Research aims to guide students through the complexities of scholarship in the digital age, from Open Science to Research Data Management and beyond.
At the HDYDI kick-off event, we invite three researchers from the Faculty of Arts to open the black box of their research workflows. By sharing the practical tools, decisions, and challenges that shape their day‑to‑day work, they aim to offer the first-year PhD researchers a realistic insight into what digital scholarship can look like across disciplines. We hope these behind‑the‑scenes glimpses help you discover approaches that can inform your own research journey!
Tim Debroyer: From Paper to Digital Source
The first speaker, Tim Debroyer, is a third-year PhD candidate at the Cultural History since 1750 research group. Under the supervision of Joris Vandendriessche and Kaat Wils, Tim is studying the evolution of 20th-century Belgian patient organisations as an overlooked link in the development of the modern welfare state. This involves examining their oral history as well as archival and published sources.
The focus of Tim’s talk is on the latter – periodicals specifically form one of the most important sources of information for his project. Faced with thousands of pages early on in his research project, he had to make strategic decisions: what to photograph, how to photograph it, and which digital methods were worth the investment.
Taking BVS Nieuws, the periodical of a diabetes association founded in the 1940s, as an example, Tim explains that he ended up manually photographing the entire series of journals so as to allow for a more thorough discourse analysis. This experience taught him some “tricks” which might be useful to others looking to photograph large amounts of text. Firstly, he used a classic camera in order to avoid the post-processing which smartphones tend to apply, and which can harm OCR quality. Secondly, he made sure to always photograph beyond the edges of the page to make it easier for the OCR software to recognize the boundaries. Thirdly, since taking pictures in the library was quite hectic, Tim always made notes of what he was doing: for instance, what stood out in the issues and what was missing – this made it much easier to return to the sources later on in his trajectory.
Once he had properly organized the resulting pictures in folders per issue or volume with short, meaningful names, Tim set out to extract the text using OCR (Optical Character Recognition) tools in order to enable keyword searches and quantitative analysis. (This is a labor-intensive step, he cautions, so make sure that it makes sense for your methodology before adopting it yourself.) Numerous scanning apps and online tools exist – Tesseract, Google Cloud Vision and Transkribus (for handwritten text) are great options for the more technically minded – but Tim made use of ABBYY FineReader, a commonly used OCR tool that is very performant and user-friendly. It is a commercial tool, but computers with ABBYY licenses are available at the Maurits Sabbe Library and Agora, so researchers looking to digitize a limited number of sources are free to go there without having to purchase their own license. ABBYY FineReader allows for image pre-processing (e.g. fixing lighting, straightening and cropping pictures), supports various languages, recognizes images in sources as well, and offers various formats for exporting (including .txt files). Tim was quite satisfied with the quality of the OCR’d texts: take good pictures, he says, and ABBYY will deliver good results!
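For readers who would rather script this step than use a GUI tool, here is a minimal sketch of the same idea using the free Tesseract engine Tim mentions, via the pytesseract wrapper. The folder layout, file names, and Dutch language model below are assumptions for illustration, not Tim’s actual setup.

```python
# Batch OCR over a folder of page photos: a sketch using Tesseract/pytesseract.
from pathlib import Path

from PIL import Image
import pytesseract  # requires the Tesseract binary to be installed separately

# Hypothetical folder of page photos for one issue, organized as Tim suggests.
issue_folder = Path("bvs_nieuws/1948_vol01_issue01")

for photo in sorted(issue_folder.glob("*.jpg")):
    # lang="nld" loads Tesseract's Dutch model; pick the model for your sources.
    text = pytesseract.image_to_string(Image.open(photo), lang="nld")
    photo.with_suffix(".txt").write_text(text, encoding="utf-8")
    print(f"OCR'd {photo.name}")
```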
To conclude, Tim shows how he processed the resulting text files in AntConc, a free concordance tool that’s often used for text mining. It allows for large-scale word searching and analysis, can provide keyword frequencies and information about relations to other words, and can easily compare different corpora. (Tim provides a small tip for those looking to explore AntConc: keep a stopword list of high-frequency words with little thematic content that the tool can filter out of its analysis.)
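AntConc itself is the easier route, but the core keyword-frequency idea, including Tim’s stopword tip, fits in a few lines of Python; the stopword list and file name below are placeholders, not part of Tim’s workflow.

```python
# Keyword frequencies with a stopword filter, in the spirit of AntConc.
import re
from collections import Counter
from pathlib import Path

# A tiny Dutch stopword list as a placeholder; real lists are much longer.
STOPWORDS = {"de", "het", "een", "en", "van", "in", "is", "dat", "op", "te"}

text = Path("bvs_nieuws_corpus.txt").read_text(encoding="utf-8").lower()
words = re.findall(r"[a-zà-ÿ]+", text)  # crude tokenizer for accented Latin text

counts = Counter(word for word in words if word not in STOPWORDS)
print(counts.most_common(20))  # the top keywords once stopwords are filtered out
```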
Of course, every researcher has to figure out what workflow suits them, but Tim importantly highlights that you should think about what you want to achieve before investing in digital methods. Consider the nature of your research project, the characteristics of your source corpus, the methodologies you use (discourse analysis, quantitative analysis, network & visual analysis) and let these things decide how you will process and study your sources. At the same time, don’t be afraid to try out new tools that might work well for you!
Of course, the quality of ABBYY FineReader’s OCR results depends on the quality of the input images.
Lauren Ottaviani: Mapping and Analyzing Women’s Magazine Archives
Our second speaker is Lauren Ottaviani, fourth-year PhD candidate in English Literature. Lauren’s project, supervised by Elke D’hoker, focuses on the representation of the women’s suffrage movement in two conservative, middlebrow periodicals dating to the late 19th and early 20th centuries: The Woman at Home and Lady of the House. In doing so, the research seeks to consider the interaction between suffrage and domestic ideals at the turn of the twentieth century.
Similarly to Tim, then, Lauren also works with a large corpus of periodicals; and just as we saw with Tim, many of the magazines’ issues – which tend to be quite lengthy – remained as yet undigitized. The complexity of her materials meant that Lauren had to decide early on how to approach data management efficiently. In the end, a combination of three tools informed her research workflow.
Firstly, early on, she shifted from using Word for note-taking to using the free open-source tool Obsidian instead. As Lauren says, Obsidian (which was covered in last year’s HDYDI session as well) has the same ease of use that a program like Word offers, but you’ll actually be able to find your note again! With its added functionality, Obsidian allowed her to create a relational database of notes categorized by date, theme, or type, so as to keep track of any stories worth revisiting. Through tags and linked notes, Lauren could keep track of authorship, include direct links to the digitized magazine pages, and even uncover recurring anonymous authors. It’s also just a great tool for conference notes and miscellaneous admin.
Secondly, Lauren made use of the storage that’s provided by KU Leuven on OneDrive for Business. Currently, OneDrive is no longer recommended as a primary storage solution for research data at the university,¹ but it does have some useful features – and it proved particularly handy for Lauren’s use case. Using the OneDrive smartphone app, she took pictures of interesting articles in the periodicals she was studying and placed those in her pre-organized folder structure. In contrast to Tim, Lauren did not think full OCR of her corpus was worth the time investment or really relevant to her research questions, but this smaller-scale scanning process (which resulted in perfectly legible captures) worked great for her methodology.
Thirdly and finally, Lauren also adopted Nodegoat as part of her workflow, mainly for its “mapping” potential. That is, Nodegoat is a database tool, but it also offers built-in network visualization capabilities, which Lauren used to map out different entries – i.e. letters from the magazines’ correspondence columns – tagged with geolocations. The resulting visualization allowed her to track where readers lived, what the magazines’ geographical reach was, and how their readership expanded over time – elements that were central to her analysis of the periodicals’ circulation.
Using a combination of these three tools, Lauren was able to create a structured, well-organized database out of a vast, undigitized corpus; and even though her approach differed quite substantially from that of Tim, both illustrate how the right tools, used well, help make large-scale periodical research manageable.
Using Nodegoat, Lauren was able to map out the readership of the periodicals she’s studying.
Sinem Bilican: Managing Multimodal Data in Healthcare Research
Sinem Bilican is the last speaker: as a PhD candidate at the Research Unit Translation & Interpreting Studies, she is part of the interdisciplinary research project Managing Language Barriers in Unplanned Care (MaLBUC). With the help of her supervisor Heidi Salaets, Sinem studies linguistic diversity and multilingual communication in healthcare practices with the goal of laying bare overlooked communication barriers. As such, her project involves collaboration with the Faculty of Medicine, and we can reasonably expect very different data types from what we saw in Tim’s and Lauren’s presentations.
Indeed, the interdisciplinary and collaborative nature of the research project – which encompasses ethnographic observations as well as a large-scale survey and interviews – necessitates the implementation of clear research data management practices. Sinem works with extensive field notes, images, video and audio recordings, questionnaires, and other survey data: a lot of materials to manage, to be sure!
Sinem begins by outlining the tools involved in her daily research workflow. Zotero is a usual suspect here, and one which we see in many researchers’ workflows as a handy reference manager as well as a note-taking and annotation tool. OneDrive, meanwhile, enables Sinem to exchange data, drafts and other documents transparently between team members; whereas for a related larger-scale project, the team opted for the ease of use of Teams and SharePoint (which is a recommended storage solution at the Faculty of Arts). Finally, Obsidian is mentioned again, and Sinem stresses its convenience for taking both academic and miscellaneous notes.
Next, Sinem presents some of the tools she used during the data collection phase of her research project. Interestingly, the first tool she talks about is an actual physical tool: a Livescribe pen. This smart pen with a built-in recorder synchronizes handwritten notes with audio, allowing Sinem to easily reconstruct interviews and medical consultations she attended² – after a day of fieldwork, you can just plug it into your laptop and have everything appear in the Livescribe app. For the surveys, Sinem uses REDCap, which is commonly used in the Biomedical Sciences: it is a highly secure, KU Leuven-authenticated tool that can automatically generate full survey reports. It is, as Sinem points out, also quite a technical tool, but the university provides comprehensive support for users.
The last tool Sinem considers takes us from data collection to research dissemination – namely, Canva. Canva is a user-friendly, web-based design platform that’s great for making posters, visuals, and any other materials you might need to present your research. It allows for image upscaling, QR-code generation, and even themed PowerPoint slide decks. Sinem’s enthusiasm for Canva is infectious – and fittingly, she used it to create her HDYDI presentation as well!
By combining these tools, Sinem is able to navigate a complex, interdisciplinary project that involves varied datasets with clarity and structure; and while her workflow differs markedly from those of Tim and Lauren, it likewise shows how thoughtful tool choices can make even the most challenging research environments manageable.
REDCap proved a useful tool for Sinem’s research data workflow.
Across all three presentations, the workflows we saw revealed both overlaps and differences, but the shared message was clear: the best workflow is the one that genuinely works for your project. Let these examples inspire you, try out the tools that seem useful, and keep what supports your work. With a bit of exploration, you may find a data workflow that not only suits your project, but strengthens it!
¹ As explained in the university’s storage solution FAQ, there are a number of reasons why OneDrive is no longer recommended as a primary solution for long-term research data storage; most significantly the fact that data stored on OneDrive servers is inaccessible to KU Leuven, which goes against RDM policy (principle II). This means that any data that you’ve kept on OneDrive is erased as soon as you leave the university for any reason, and recovering files is a difficult and costly procedure.
² Of course, these recordings were made with the informed consent of all involved.