
Recap: How do you do it? A behind-the-scenes look at research workflows (2025)

27 February 2026, 18:18

Every academic year, the HDYDI (How Do You Do (It)?) event on research data workflows signals the start of the Digital Scholarship Module. Through a series of sessions and (mini-)workshops, Artes Research aims to guide students through the complexities of scholarship in the digital age, from Open Science to Research Data Management and beyond.

At the HDYDI kick-off event, we invite three researchers from the Faculty of Arts to open the black box of their research workflows. By sharing the practical tools, decisions, and challenges that shape their day‑to‑day work, they aim to offer the first-year PhD researchers a realistic insight into what digital scholarship can look like across disciplines. We hope these behind‑the‑scenes glimpses help you discover approaches that can inform your own research journey!


Tim Debroyer: From Paper to Digital Source

The first speaker, Tim Debroyer, is a third-year PhD candidate at the Cultural History since 1750 research group. Under the supervision of Joris Vandendriessche and Kaat Wils, Tim is studying the evolution of 20th-century Belgian patient organisations as an overlooked link in the development of the modern welfare state. This involves examining their oral history as well as archival and published sources.

The focus of Tim’s talk is on the latter – periodicals specifically form one of the most important sources of information for his project. Faced with thousands of pages early on in his research project, he had to make strategic decisions: what to photograph, how to photograph it, and which digital methods were worth the investment.

Taking BVS Nieuws, the periodical of a diabetes association founded in the 1940s, as an example, Tim explains that he ended up manually photographing the entire series of journals so as to allow for a more thorough discourse analysis. This experience taught him some “tricks” which might be useful to others looking to photograph large amounts of text. Firstly, he used a classic camera in order to avoid the post-processing which smartphones tend to apply, and which can harm OCR quality. Secondly, he made sure to always photograph beyond the edges of the page to make it easier for the OCR software to recognize the boundaries. Thirdly, since taking pictures in the library was quite hectic, Tim always made notes of what he was doing: for instance, what stood out in the issues and what was missing – this made it much easier to return to the sources later on in his trajectory.

Once he had properly organized the resulting pictures in folders per issue or volume with short, meaningful names, Tim set out to extract the text using OCR (Optical Character Recognition) tools in order to enable keyword searches and quantitative analysis. (This is a labor-intensive step, he cautions, so make sure that it makes sense for your methodology before adopting it yourself.) Numerous scanning apps and online tools exist – Tesseract, Google Cloud Vision and Transkribus (for handwritten text) are great options for the more technically minded – but Tim made use of ABBYY FineReader, a commonly used OCR tool that is both powerful and user-friendly. It is a commercial tool, but computers with ABBYY licenses are available at the Maurits Sabbe Library and Agora, so researchers looking to digitize a limited number of sources can go there without having to purchase their own license. ABBYY FineReader allows for image pre-processing (e.g. fixing lighting, straightening and cropping pictures), supports various languages, recognizes images within sources as well, and offers various export formats (including .txt files). Tim was quite satisfied with the quality of the OCR’d texts: take good pictures, he says, and ABBYY will deliver good results!
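
For readers who lean toward the free, scriptable route, a minimal sketch of batch OCR with Tesseract (via the pytesseract wrapper) might look as follows. The folder names and the Dutch language code are assumptions for illustration – Tim himself worked in ABBYY FineReader’s interface instead.

```python
# Hypothetical batch OCR over a folder of page photographs using Tesseract.
from pathlib import Path

import pytesseract
from PIL import Image

SOURCE_DIR = Path("photos/bvs_nieuws/1948_issue_01")   # assumed folder layout
OUTPUT_DIR = Path("ocr_output/bvs_nieuws/1948_issue_01")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for image_path in sorted(SOURCE_DIR.glob("*.jpg")):
    # "nld" = Dutch; choose the language(s) of your own sources.
    text = pytesseract.image_to_string(Image.open(image_path), lang="nld")
    (OUTPUT_DIR / f"{image_path.stem}.txt").write_text(text, encoding="utf-8")
```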

To conclude, Tim shows how he processed the resulting text files in AntConc, a free concordance tool that’s often used for text mining. It allows for large-scale word searching and analysis, can provide keyword frequencies and information about relations to other words, and can easily compare different corpora. (Tim provides a small tip for those looking to explore AntConc: keep a stopword list of high-frequency words with little thematic content that the tool can filter out of its analysis.)
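
To make the stopword idea concrete: what AntConc does through its interface can be approximated in a few lines of Python. This is only an illustrative sketch – the file locations and the handful of Dutch stopwords are placeholders.

```python
# Sketch: word frequencies over OCR'd text files, with a stopword filter.
from collections import Counter
from pathlib import Path
import re

stopwords = {"de", "het", "een", "en", "van", "in"}  # high-frequency filler words

counts = Counter()
for txt_file in Path("ocr_output").rglob("*.txt"):
    words = re.findall(r"\w+", txt_file.read_text(encoding="utf-8").lower())
    counts.update(w for w in words if w not in stopwords)

print(counts.most_common(20))  # the 20 most frequent content words
```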

Of course, every researcher has to figure out what workflow suits them, but Tim importantly highlights that you should think about what you want to achieve before investing in digital methods. Consider the nature of your research project, the characteristics of your source corpus, the methodologies you use (discourse analysis, quantitative analysis, network & visual analysis) and let these things decide how you will process and study your sources. At the same time, don’t be afraid to try out new tools that might work well for you!

Of course, the quality of ABBYY FineReader’s OCR results depends on the quality of the input images.


Lauren Ottaviani: Mapping and Analyzing Women’s Magazine Archives

Our second speaker is Lauren Ottaviani, fourth-year PhD candidate in English Literature. Lauren’s project, supervised by Elke D’hoker, focuses on the representation of the women’s suffrage movement in two conservative, middlebrow periodicals dating to the late 19th and early 20th centuries: The Woman at Home and Lady of the House. In doing so, the research seeks to consider the interaction between suffrage and domestic ideals at the turn of the twentieth century.

Like Tim, then, Lauren works with a large corpus of periodicals; and just as in Tim’s case, many of the magazines’ issues – which tend to be quite lengthy – had not yet been digitized. The complexity of her materials meant that Lauren had to decide early on how to approach data management efficiently. In the end, a combination of three tools informed her research workflow.

Firstly, early on, she shifted from using Word for note-taking to the free tool Obsidian instead. As Lauren says, Obsidian (which was covered in last year’s HDYDI session as well) offers the same ease of use as a program like Word, but you’ll actually be able to find your notes again! With its added functionality, Obsidian allowed her to create a relational database of notes categorized by date, theme, or type, so as to keep track of any stories worth revisiting. Through tags and linked notes, Lauren could keep track of authorship, include direct links to the digitized magazine pages, and even uncover recurring anonymous authors. It’s also just a great tool for conference notes and miscellaneous admin.

Secondly, Lauren made use of the storage that’s provided by KU Leuven on OneDrive for Business. Currently, OneDrive is no longer recommended as a primary storage solution for research data at the university,¹ but it does have some useful features – and it proved particularly handy for Lauren’s use case. Using the OneDrive smartphone app, she took pictures of interesting articles in the periodicals she was studying and placed those in her pre-organized folder structure. In contrast to Tim, Lauren did not think full OCR of her corpus was worth the time investment or really relevant to her research questions, but this smaller-scale scanning process (which resulted in perfectly legible captures) worked great for her methodology.

Thirdly and finally, Lauren also adopted Nodegoat as part of her workflow, mainly for its “mapping” potential. That is, Nodegoat is a database tool, but it also offers built-in network visualization capabilities, which Lauren used to map out different entries – i.e. letters from the magazines’ correspondence columns – tagged with geolocations. The resulting visualization allowed her to track where readers lived, what the magazines’ geographical reach was, and how their readership expanded over time – elements that were central to her analysis of the periodicals’ circulation.
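
Nodegoat handles this kind of geographic visualization entirely within its own interface. Purely as a hedged illustration of the underlying idea, here is what a comparable plot could look like in Python, assuming a hypothetical CSV export with latitude, longitude, and year columns:

```python
# Sketch: plotting geotagged correspondence entries from an assumed CSV export.
import matplotlib.pyplot as plt
import pandas as pd

letters = pd.read_csv("correspondence_entries.csv")  # hypothetical file

fig, ax = plt.subplots()
sc = ax.scatter(letters["longitude"], letters["latitude"],
                c=letters["year"], cmap="viridis", s=10)
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
fig.colorbar(sc, label="Year")  # colour shows how readership expanded over time
plt.show()
```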

Using a combination of these three tools, Lauren was able to create a structured, well-organized database out of a vast, undigitized corpus; and even though her approach differed quite substantially from that of Tim, both illustrate how the right tools, used well, help make large-scale periodical research manageable.

Using Nodegoat, Lauren was able to map out the readership of the periodicals she’s studying.


Sinem Bilican: Managing Multimodal Data in Healthcare Research

Sinem Bilican is the last speaker: as a PhD candidate at the Research Unit Translation & Interpreting Studies, she is part of the interdisciplinary research project Managing Language Barriers in Unplanned Care (MaLBUC). With the help of her supervisor Heidi Salaets, Sinem studies linguistic diversity and multilingual communication in healthcare practices with the goal of laying bare overlooked communication barriers. As such, her project involves collaboration with the Faculty of Medicine, and we can reasonably expect very different data types from what we saw in Tim’s and Lauren’s presentations.

Indeed, the interdisciplinary and collaborative nature of the research project – which encompasses ethnographic observations as well as a large-scale survey and interviews – necessitates the implementation of clear research data management practices. Sinem works with extensive field notes, images, video and audio recordings, questionnaires, and other survey data: a lot of materials to manage, to be sure!

Sinem begins by outlining the tools involved in her daily research workflow. Zotero is a usual suspect here, and one which we see in many researchers’ workflows as a handy reference manager as well as a note-taking and annotation tool. OneDrive, meanwhile, enables Sinem to exchange data, drafts and other documents transparently between team members; whereas for a related larger-scale project, the team opted for the ease of use of Teams and SharePoint (which is a recommended storage solution at the Faculty of Arts). Finally, Obsidian is mentioned again, and Sinem stresses its convenience for taking both academic and miscellaneous notes.

Next, Sinem presents some of the tools she used during the data collection phase of her research project. Interestingly, the first tool she talks about is an actual physical tool: a Livescribe pen. This smart pen with a built-in recorder synchronizes handwritten notes with audio, allowing Sinem to easily reconstruct interviews and medical consultations she attended² – after a day of fieldwork, you can just plug it into your laptop and have everything appear in the Livescribe app. For the surveys, Sinem uses REDCap, which is commonly used in the Biomedical Sciences: it is a highly secure, KU Leuven-authenticated tool that can automatically generate full survey reports. It is, as Sinem points out, also quite a technical tool, but the university provides comprehensive support for users.
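
Most REDCap users never leave the web interface, but the platform also exposes an API for scripted exports. As a hedged sketch – the URL and token below are placeholders, and your institution issues the real ones – pulling survey records into Python could look like this:

```python
# Sketch: exporting survey records from a REDCap project via its API.
import requests

REDCAP_URL = "https://redcap.example-university.be/api/"  # placeholder URL
API_TOKEN = "YOUR_PROJECT_TOKEN"  # issued per project; keep it secret

response = requests.post(REDCAP_URL, data={
    "token": API_TOKEN,
    "content": "record",   # ask for record (survey) data
    "format": "json",      # could also be "csv" or "xml"
    "type": "flat",        # one row per record
})
response.raise_for_status()
records = response.json()
print(f"Exported {len(records)} records")
```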

The last tool Sinem considers takes us from data collection to research dissemination – namely, Canva. Canva is a user-friendly, web-based design platform that’s great for making posters, visuals, and any other materials you might need to present your research. It allows for image upscaling, QR-code generation, and even themed PowerPoint slide decks. Sinem’s enthusiasm for Canva is infectious – and fittingly, she used it to create her HDYDI presentation as well!

By combining these tools, Sinem is able to navigate a complex, interdisciplinary project that involves varied datasets with clarity and structure; and while her workflow differs markedly from those of Tim and Lauren, it likewise shows how thoughtful tool choices can make even the most challenging research environments manageable.

REDCap proved a useful tool for Sinem’s research data workflow.


Across all three presentations, the workflows we saw revealed both overlaps and differences, but the shared message was clear: the best workflow is the one that genuinely works for your project. Let these examples inspire you, try out the tools that seem useful, and keep what supports your work. With a bit of exploration, you may find a data workflow that not only suits your project, but strengthens it!


  1. As explained in the university’s storage solution FAQ, there are a number of reasons why OneDrive is no longer recommended as a primary solution for long-term research data storage; most significantly the fact that data stored on OneDrive servers is inaccessible to KU Leuven, which goes against RDM policy (principle II). This means that any data that you’ve kept on OneDrive is erased as soon as you leave the university for any reason, and recovering files is a difficult and costly procedure. ↩
  2. Of course, these recordings were made with informed consent of all involved. ↩

Recap: How do you do it? A behind-the-scenes look at research workflows (2024)

12 December 2024, 00:02

Every academic year, the HDYDI (How Do You Do It?) event on research data workflows signals the start of the Digital Scholarship Module. Through a series of sessions and (mini-)workshops, Artes Research aims to guide students through the complexities of scholarship in the digital age, from Open Science to Research Data Management and beyond. At the HDYDI kick-off event, three researchers from the Faculty of Arts lift the curtain on their own research workflow and offer a behind-the-scenes look at the ways in which they approach their research, the data they engage with, and the tools they use in doing so. The goal of this session is to provide examples of more advanced workflows for the first-year PhD researchers as they embark on their own research journey. Hopefully this recap of the session can spark some inspiration for you!


Seb Verlinden – Using Obsidian as a note-taking tool for literature

The first speaker, Seb Verlinden, is a second-year PhD candidate in medieval history. Under the supervision of Maïka De Keyzer and Bart Vanmontfort, Seb is studying the long-term landscape changes – mainly in the form of gradual desertification – that characterize the Campine region, one of the driest areas in Belgium. Particular focus is on the impact of eighteenth-century drainage in the region.

Seb’s talk concerns an issue that all researchers can relate to, regardless of the relative complexity of their project – that of taking notes. It is true, as Seb highlights, that every researcher has their own unique workflow, often relying on a combination of tools that makes sense for them (in his case, QGIS, FileMaker Pro, MAXQDA, and spreadsheet software). But at the heart of any research process is the need to organize one’s thoughts, and this is where note-taking apps can make a real difference. So, what are some of the options out there?

Zotero is a possible solution – one we’ve already discussed elsewhere on this blog. As a reference manager first and foremost, Zotero has the potential to become a researcher’s living library, a knowledge base covering all relevant literature. It also has great capabilities for annotating PDFs, especially with its new 7.0 update. What you’re missing in the context of note-taking, however, is the big picture. Seb aptly points out that using Zotero to make notes is like putting post-its in books: you have no real overarching structure, and no way to easily link notes across books.

Other tools are likewise flawed. Lots of researchers use Microsoft Word to take notes, even though it is primarily tailored to longform writing. As a result, it is easy to lose track of notes unless you’re willing to navigate multiple files; and it tends to grow slow and cumbersome, since it is preoccupied with layout. It is, simply put, unintuitive for this purpose.

This is why Seb puts forward another solution, one that he believes to be faster, better automated, and easier to use: Obsidian. A widely supported and free tool, Obsidian has clear advantages: in contrast to both Microsoft Word and Zotero, it uses an open file format (.md or Markdown, an accessible plain-text markup language), is full-text searchable, and provides a structured overview of notes. Moreover, it offers a versatile workspace, allowing you to go as simple or as complex as you like – especially with the addition of supported plugins. One such plugin, in fact, allows your Obsidian environment to easily interoperate with your Zotero library (including references, bibliographies, and PDF annotations), which is particularly useful.

Seb ends his talk by highlighting another key benefit of using Obsidian. By introducing links in your notes, it is possible to cross-reference other notes within your system with minimal user effort; and through the use of tags, you can generate another layer of structure. Obsidian then uses this information to visualize the relations between your different notes, automatically creating a network of clusters that correspond to certain topics of interest. This way, it expands the possibilities of your data with hardly any extra effort on the researcher’s part – a great reason to think about using Obsidian for your own note-taking needs!
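
Under the hood, that graph view is built from nothing more than the [[wiki-links]] in your Markdown files. As a purely illustrative sketch (the vault path is a placeholder), you could collect the same note-to-note connections yourself in Python:

```python
# Sketch: scanning an Obsidian vault for [[wiki-links]] between notes.
from pathlib import Path
import re

VAULT = Path("~/ObsidianVault").expanduser()  # hypothetical vault location
link_pattern = re.compile(r"\[\[([^\]|#]+)")  # matches [[Note]] and [[Note|alias]]

edges = []
for note in VAULT.rglob("*.md"):
    for target in link_pattern.findall(note.read_text(encoding="utf-8")):
        edges.append((note.stem, target.strip()))

print(f"{len(edges)} links between notes")  # the raw material of the graph view
```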

Seb showcased his own network of notes, automatically clustered by Obsidian. This way, he can visually grasp the connections between different topics of interest!

Laura Soffiantini – Managing linguistic and historical data. A PhD workflow using FileMaker

Laura Soffiantini is the second speaker: as a PhD researcher at the Cultural Studies Research Group, she is currently analyzing the geographical representation of Greece in Pliny the Elder’s Naturalis Historia. With the help of her supervisor Margherita Fantoli, Laura intends to shed new light on the way in which Greece was perceived in Flavian-era Rome. In order to do so, she has to manage a varied mix of linked data – textual, linguistic, and historical – as part of her daily routine.

Grappling with 37 books of a classical encyclopedia, and dealing with data in different formats and with different qualities (actual text, numeric coordinates, symbols, etc.), Laura realized the importance of proper Research Data Management. It enables aggregating, manipulating, analyzing, and comparing your data more efficiently throughout – and even beyond – the research process. Indeed, a challenge faced by many researchers is the retrieval of data collected or processed at an earlier time, with the aim of relating it to “new” data. In this context, Laura provides a look at her own research workflow.

The primary strategy in managing your data, she remarks, is to structure it. By adding structure to your data, you can parse it more easily and return to it without issues, even in later phases of your project. Software like Obsidian is indispensable for this purpose, but it’s also good to think about using tabular formats like .csv (an open plain-text format) as a way to organize your data. A useful tool put forward here is pandas, a Python library designed to help manage and analyze data from such .csv files. That might sound technical, but Laura assures us that – even if you have no background in programming – pandas is a very accessible and convenient tool for handling tabular files.
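
To give a hedged impression of that accessibility – the file name is invented, and the column names are borrowed from Laura’s workflow as described below:

```python
# Sketch: loading a tabular file and asking it a question in a few lines.
import pandas as pd

df = pd.read_csv("pliny_tokens.csv")  # hypothetical .csv of annotated tokens

# How many distinct lemmata appear at each book-and-chapter position?
print(df.groupby("book_chapter")["lemma"].nunique().head())
```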

Having thought about what data she worked with (an essential step for every researcher), Laura adopted an initial workflow in three parts. She first started out with .json files containing Pliny’s text, which she converted into tabular .csv files, adding data related to the lemmatization of the corpus, part-of-speech tagging, and references to book and chapter positions. Subsequently, she thought about grouping this data into different categories, which she assigned to different columns – such that there is a column titled “book_chapter”, one titled “lemma”, and so on. Finally, Laura assigned identifiers to the information contained in these files; she explains she wasn’t aware of the importance of such identifiers at the start of the project, but now realizes they are crucial for keeping tabular data organized and linked.
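
A hedged sketch of what the first and third of those steps might look like in pandas – the structure of the input .json is an assumption for illustration, not Laura’s actual data:

```python
# Sketch: flatten a .json file of annotated text into a .csv with identifiers.
import json

import pandas as pd

with open("pliny_book_04.json", encoding="utf-8") as f:
    tokens = json.load(f)  # assumed: a list of dicts, one per token

df = pd.DataFrame(tokens)  # e.g. columns "token", "lemma", "pos", "book_chapter"
df["token_id"] = range(1, len(df) + 1)  # stable identifiers: the keys to the data

df.to_csv("pliny_tokens.csv", index=False)
```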

As a result, Laura ended up with multiple .csv files, which she then related to each other using FileMaker (with the expert assistance of Mark Depauw and Tom Gheldof). One table, for instance, contains a list of all the Latin words used (the tokens, e.g. urbs) alongside their identifier, book number, lemma, and, where available, an identifier linking to the Trismegistos database of ancient texts. Another contains the lemma along with its part-of-speech tag (e.g. proper noun) and meaning (e.g. “city”). By linking the different files through the use of identifiers – the keys to the data – Laura built a relational database that is easily managed and organized in FileMaker. The resulting dataset is at the core of her research project.
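
FileMaker establishes these links through its graphical interface; as a hedged code-level analogue, the same relational join can be expressed in pandas (the column names follow the examples above and are otherwise assumed):

```python
# Sketch: joining a token table and a lemma table on a shared identifier.
import pandas as pd

tokens = pd.read_csv("pliny_tokens.csv")  # token, lemma_id, book_chapter, ...
lemmas = pd.read_csv("pliny_lemmas.csv")  # lemma_id, lemma, pos, meaning

# One row per token, enriched with its lemma's part of speech and meaning:
linked = tokens.merge(lemmas, on="lemma_id", how="left")
print(linked[["token", "lemma", "pos", "meaning"]].head())
```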

The main takeaway Laura wants to leave us with is that it is important to create an environment in which you can efficiently collect, store, manipulate, and analyze your data. This should not come at the cost of traditional approaches and methodologies – in fact, you can add to them to create a better workflow as a whole!

Laura showed us some examples of how she used specific identifiers to connect tabular files and create a relational database in FileMaker.

Zakaria El Houbba – Obsidian as part of the research workflow

The third and final speaker is Zakaria El Houbba, third-year PhD candidate in Arabic Studies. Zakaria’s project, supervised by Arjan Post, focuses on the pre-modern relation between Islamic jurisprudence and Sufism, and in particular on the way in which these two strands are united in the figure of Aḥmad Zarrūq. In doing so, the research aims to come to a theory of applied legal epistemology in Zarrūq’s Sufism.

By discussing his own workflow in detail, Zakaria intends to highlight a number of key takeaways revolving around the idea of the “second brain”. Because we are so deeply involved with knowledge gathering on a daily basis, and constantly receive input from various sources (whether academic or not), we run the risk of being overwhelmed by a flood of information. When you use software to carry that burden for you, you can save your own brainpower for actual critical thinking rather than secondary tasks like categorizing information. This way, you’re effectively constructing what’s referred to as a second brain.

In this context, Zakaria also makes use of Obsidian, though he approaches it from a very different angle than Seb. Zakaria doesn’t actually enter all of his notes into Obsidian – he first uses an app like Microsoft OneNote as a “vault” to record random, unprocessed thoughts, which he periodically goes through to think about how they fit in his project. He then sorts these thoughts and puts them in corresponding folders (relating to certain projects, classes, issues, etc.) in order to process them properly in Obsidian. Zakaria emphasizes that it’s fine to keep it simple and take it slow, focusing on what you specifically need from the note-taking environment so as not to get overwhelmed by all the options and information.

There are more tools Zakaria uses in his workflow – in fact, he says, there is a constant conversation between himself, Obsidian, Zotero, and ChatGPT. He uses Zotero to make notes and highlight text when reading articles, which he imports into Obsidian and categorizes using tags. Afterwards, he copies those highlights from Obsidian into ChatGPT, asking it to take up the role of copy editor and summarize the text. The resulting summary, which he critically revises, is then given a place in Obsidian once again.

Next to the powerful visualization capabilities discussed by Seb, Zakaria explains that Obsidian can also be used to create subpages within notes to explain terms and concepts, provide brief biographies of important figures, and so on. These “subnotes” can be linked back to in other notes as well, resulting in a kind of personalized Wikipedia for your research topic. This can also be helpful when you’re following classes on a certain topic or revising your own teaching material!

Finally, speaking of teaching material, Zakaria points us to a couple of helpful AI tools that can be used to process video files, such as recorded lectures or talks – whether you attended them or gave them yourself. One such tool is NoteGPT, which essentially functions as a transcriber and summarizer of recordings. You can revise and copy the resulting transcriptions and summaries into Obsidian as well, further expanding the scope of your second brain. Brisk Teaching serves a similar purpose to NoteGPT, but can also be used to turn a video into a PowerPoint presentation, which can be very convenient and time-saving. By thus constructing a workflow, gradually accumulating relevant information through different tools, it becomes much easier to manage your research.
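
NoteGPT and Brisk Teaching are web services; for those who would rather keep recordings on their own machine, a rough open-source analogue of the transcription step is OpenAI’s Whisper model. A minimal sketch, assuming the file name is a placeholder and ffmpeg is installed:

```python
# Sketch: local transcription of a recorded lecture with open-source Whisper.
import whisper

model = whisper.load_model("base")        # small, fast model; larger ones exist
result = model.transcribe("lecture.mp4")  # ffmpeg extracts the audio track

print(result["text"])  # paste or save this into Obsidian for further processing
```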

The home tab of Zakaria’s Obsidian environment. As both he and Seb explained, you can make it as simple or complex as you like – try to make it a welcoming space for your daily research workflow!

The workflows of the presenters reveal both similarities and differences, but there’s one thing all three can agree on – what’s important is to find a workflow that works for you. To that end, take inspiration from some of the tools and processes described here, but always make sure they support your specific research methods. This was emphasized in the questions as well: don’t feel pressured to adopt a tool like Obsidian, but try it out and see if it accommodates your needs. Who knows, you might uncover a more efficient workflow or see your data from a new perspective.

Happy holidays from the Artes Research team, and may your data be blessed in the year to come! 🎄
