
Recap: How do you do it? A behind-the-scenes look at research workflows (2025)

27 February 2026, 18:18

Every academic year, the HDYDI (How Do You Do (It)?) event on research data workflows signals the start of the Digital Scholarship Module. Through a series of sessions and (mini-)workshops, Artes Research aims to guide students through the complexities of scholarship in the digital age, from Open Science to Research Data Management and beyond.

At the HDYDI kick-off event, we invite three researchers from the Faculty of Arts to open the black box of their research workflows. By sharing the practical tools, decisions, and challenges that shape their day‑to‑day work, they aim to offer the first-year PhD researchers a realistic insight into what digital scholarship can look like across disciplines. We hope these behind‑the‑scenes glimpses help you discover approaches that can inform your own research journey!


Tim Debroyer: From Paper to Digital Source

The first speaker, Tim Debroyer, is a third-year PhD candidate at the Cultural History since 1750 research group. Under the supervision of Joris Vandendriessche and Kaat Wils, Tim is studying the evolution of 20th-century Belgian patient organisations as an overlooked link in the development of the modern welfare state. This involves examining their oral history as well as archival and published sources.

The focus of Tim’s talk is on the latter – periodicals specifically form one of the most important sources of information for his project. Faced with thousands of pages early on in his research project, he had to make strategic decisions: what to photograph, how to photograph it, and which digital methods were worth the investment.

Taking BVS Nieuws, the periodical of a diabetes association founded in the 1940s, as an example, Tim explains that he ended up manually photographing the entire series of journals so as to allow for a more thorough discourse analysis. This experience taught him some “tricks” which might be useful to others looking to photograph large amounts of text. Firstly, he used a classic camera in order to avoid the post-processing which smartphones tend to apply, and which can harm OCR quality. Secondly, he made sure to always photograph beyond the edges of the page to make it easier for the OCR software to recognize the boundaries. Thirdly, since taking pictures in the library was quite hectic, Tim always made notes of what he was doing: for instance, what stood out in the issues and what was missing – this made it much easier to return to the sources later on in his trajectory.

Once he had properly organized the resulting pictures in folders per issue or volume with short, meaningful names, Tim set out to extract the text using OCR (Optical Character Recognition) tools in order to enable keyword searches and quantitative analysis. (This is a labor-intensive step, he cautions, so make sure that it makes sense for your methodology before adopting it yourself.) Numerous scanning apps and online tools exist – Tesseract, Google Cloud Vision and Transkribus (for handwritten text) are great options for the more technically minded – but Tim made use of ABBYY FineReader, a commonly used OCR tool that is powerful and user-friendly. It is a commercial tool, but computers with ABBYY licenses are available at the Maurits Sabbe Library and Agora, so researchers looking to digitize a limited number of sources are free to go there without having to purchase their own license. ABBYY FineReader allows for image pre-processing (e.g. fixing lighting, straightening and cropping pictures), supports various languages, recognizes images in sources as well, and offers various export formats (including .txt files). Tim was quite satisfied with the quality of the OCR’d texts: take good pictures, he says, and ABBYY will deliver good results!

To conclude, Tim shows how he processed the resulting text files in AntConc, a free concordance tool that’s often used for text mining. It allows for large-scale word searching and analysis, can provide keyword frequencies and information about relations to other words, and can easily compare different corpora. (Tim provides a small tip for those looking to explore AntConc: keep a stopword list of high-frequency words with little thematic content that the tool can filter out of its analysis.)
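To make the idea concrete: a stopword-filtered frequency count – the core of what AntConc automates – can be sketched in a few lines of Python. The sample text and stopword list below are invented for illustration, not taken from Tim’s corpus.

```python
import re
from collections import Counter

# A toy stopword list of high-frequency words with little thematic content.
STOPWORDS = {"de", "het", "een", "en", "van", "the", "a", "and", "of"}

def keyword_frequencies(text: str, stopwords: set[str] = STOPWORDS) -> Counter:
    """Tokenize into lowercase words and count them, ignoring stopwords."""
    tokens = re.findall(r"[a-zà-ÿ]+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

freqs = keyword_frequencies("De vereniging en de leden van de vereniging")
print(freqs.most_common(2))  # [('vereniging', 2), ('leden', 1)]
```

A dedicated tool like AntConc adds concordance views, collocates, and corpus comparison on top of this basic counting step, but the stopword principle is the same.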

Of course, every researcher has to figure out what workflow suits them, but Tim importantly highlights that you should think about what you want to achieve before investing in digital methods. Consider the nature of your research project, the characteristics of your source corpus, the methodologies you use (discourse analysis, quantitative analysis, network & visual analysis) and let these things decide how you will process and study your sources. At the same time, don’t be afraid to try out new tools that might work well for you!

Of course, the quality of ABBYY FineReader’s OCR results depends on the quality of the input images.


Lauren Ottaviani: Mapping and Analyzing Women’s Magazine Archives

Our second speaker is Lauren Ottaviani, fourth-year PhD candidate in English Literature. Lauren’s project, supervised by Elke D’hoker, focuses on the representation of the women’s suffrage movement in two conservative, middlebrow periodicals dating to the late 19th and early 20th centuries: The Woman at Home and Lady of the House. In doing so, the research seeks to consider the interaction between suffrage and domestic ideals at the turn of the twentieth century.

Similarly to Tim, then, Lauren also works with a large corpus of periodicals; and just as in Tim’s case, many of the magazines’ issues – which tend to be quite lengthy – had not yet been digitized. The complexity of her materials meant that Lauren had to decide early on how to approach data management efficiently. In the end, a combination of three tools informed her research workflow.

Firstly, early on, she shifted from using Word for note-taking to using the free open-source tool Obsidian instead. As Lauren says, Obsidian (which was covered in last year’s HDYDI session as well) has the same ease of use that a program like Word offers, but you’ll actually be able to find your note again! With its added functionality, Obsidian allowed her to create a relational database of notes categorized by date, theme, or type, so as to keep track of any stories worth revisiting. Through tags and linked notes, Lauren could keep track of authorship, include direct links to the digitized magazine pages, and even uncover recurring anonymous authors. It’s also just a great tool for conference notes and miscellaneous admin.
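To illustrate what such a relational note might look like, here is a hypothetical Obsidian note in Markdown; all dates, titles, tags, and links are invented for the example, not drawn from Lauren’s actual database.

```markdown
---
date: 1897-03-13
theme: correspondence
type: article-note
---
A letter signed "A Country Reader" touches on #suffrage and #domestic-ideals.
Possibly the same anonymous author as in [[1896-11 Reader Letter]].
Scan: [[Woman-at-Home-vol5-p412.jpg]]
```

The `[[double-bracket]]` links are what let Obsidian connect notes into a searchable, navigable web.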

Secondly, Lauren made use of the storage that’s provided by KU Leuven on OneDrive for Business. Currently, OneDrive is no longer recommended as a primary storage solution for research data at the university,1 but it does have some useful features – and it proved particularly handy for Lauren’s use case. Using the OneDrive smartphone app, she took pictures of interesting articles in the periodicals she was studying and placed those in her pre-organized folder structure. In contrast to Tim, Lauren did not think full OCR of her corpus was worth the time investment or really relevant to her research questions, but this smaller-scale scanning process (which resulted in perfectly legible captures) worked great for her methodology.

Thirdly and finally, Lauren also adopted Nodegoat as part of her workflow, mainly for its “mapping” potential. That is, Nodegoat is a database tool, but it also offers built-in network visualization capabilities, which Lauren used to map out different entries – i.e. letters from the magazines’ correspondence columns – tagged with geolocations. The resulting visualization allowed her to track where readers lived, what the magazines’ geographical reach was, and how their readership expanded over time – elements that were central to her analysis of the periodicals’ circulation.

Using a combination of these three tools, Lauren was able to create a structured, well-organized database out of a vast, undigitized corpus; and even though her approach differed quite substantially from that of Tim, both illustrate how the right tools, used well, help make large-scale periodical research manageable.

Using Nodegoat, Lauren was able to map out the readership of the periodicals she’s studying.


Sinem Bilican: Managing Multimodal Data in Healthcare Research

Sinem Bilican is the last speaker: as a PhD candidate at the Research Unit Translation & Interpreting Studies, she is part of the interdisciplinary research project Managing Language Barriers in Unplanned Care (MaLBUC). With the help of her supervisor Heidi Salaets, Sinem studies linguistic diversity and multilingual communication in healthcare practices with the goal of laying bare overlooked communication barriers. As such, her project involves collaboration with the Faculty of Medicine, and we can reasonably expect very different data types from what we saw in Tim’s and Lauren’s presentations.

Indeed, the interdisciplinary and collaborative nature of the research project – which encompasses ethnographic observations as well as a large-scale survey and interviews – necessitates the implementation of clear research data management practices. Sinem works with extensive field notes, images, video and audio recordings, questionnaires, and other survey data: a lot of materials to manage, to be sure!

Sinem begins by outlining the tools involved in her daily research workflow. Zotero is a usual suspect here, and one which we see in many researchers’ workflows as a handy reference manager as well as a note-taking and annotation tool. OneDrive, meanwhile, enables Sinem to exchange data, drafts and other documents transparently between team members; whereas for a related larger-scale project, the team opted for the ease of use of Teams and SharePoint (which is a recommended storage solution at the Faculty of Arts). Finally, Obsidian is mentioned again, and Sinem stresses its convenience for taking both academic and miscellaneous notes.

Next, Sinem presents some of the tools she used during the data collection phase of her research project. Interestingly, the first tool she talks about is an actual physical tool: a Livescribe pen. This smart pen with a built-in recorder synchronizes handwritten notes with audio, allowing Sinem to easily reconstruct interviews and medical consultations she attended2 – after a day of fieldwork, you can just plug it into your laptop and have everything appear in the Livescribe app. For the surveys, Sinem uses REDCap, which is commonly used in the Biomedical Sciences: it is a highly secure, KU Leuven-authenticated tool that can automatically generate full survey reports. It is, as Sinem points out, also quite a technical tool, but the university provides comprehensive support for users.

The last tool Sinem considers takes us from data collection to research dissemination – namely, Canva. Canva is a user-friendly, web-based design platform that’s great for making posters, visuals, and any other materials you might need to present your research. It allows for image upscaling, QR-code generation, and even themed PowerPoint slide decks. Sinem’s enthusiasm for Canva is infectious – and fittingly, she used it to create her HDYDI presentation as well!

By combining these tools, Sinem is able to navigate a complex, interdisciplinary project that involves varied datasets with clarity and structure; and while her workflow differs markedly from those of Tim and Lauren, it likewise shows how thoughtful tool choices can make even the most challenging research environments manageable.

REDCap proved a useful tool for Sinem’s research data workflow.


Across all three presentations, the workflows we saw revealed both overlaps and differences, but the shared message was clear: the best workflow is the one that genuinely works for your project. Let these examples inspire you, try out the tools that seem useful, and keep what supports your work. With a bit of exploration, you may find a data workflow that not only suits your project, but strengthens it!


  1. As explained in the university’s storage solution FAQ, there are a number of reasons why OneDrive is no longer recommended as a primary solution for long-term research data storage; most significantly the fact that data stored on OneDrive servers is inaccessible to KU Leuven, which goes against RDM policy (principle II). This means that any data that you’ve kept on OneDrive is erased as soon as you leave the university for any reason, and recovering files is a difficult and costly procedure. ↩
  2. Of course, these recordings were made with informed consent of all involved. ↩

Training: How Do You Do (It)? A behind-the-scenes look at research workflows (KU Leuven)

25 September 2025, 16:03

This event is only open to KU Leuven researchers and staff.

The Artes Research team from KU Leuven Libraries Artes and the ABAP council will kick off the new academic year with a special “How Do You Do (It)?” (HDYDI) session dedicated to research data workflows. This special session will coincide with the start of the Digital Scholarship Module taught by the Artes Research team. It will take place on Thursday 6 November, 14:00-16:30, in the Justus Lipsiuszaal (Erasmushuis, Leuven).

Everyone is welcome to attend, you do not need to register!

Program

14:00-15:00

To help you through the afternoon slump, we will start with coffee and cookies which will be served in the main entrance hall of the Erasmushuis.

15:00-16:30

We will then move up to the 8th floor (Justus Lipsiuszaal) to start the session which will feature talks from researchers at the Faculty of Arts who outline their research workflows: how do they approach their research, what tools do they use, with what kind of data are they working, etc. We will get a behind-the-scenes look from:

There will be lots of time for questions and getting to know each other’s workflows.

The event will take place in Leuven, but if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link.

Practical details

  • When: Thursday 6 November, from 14:00 to 16:30
  • Where: coffee in main entrance hall and session in Justus Lipsiuszaal (Erasmushuis, Leuven) with online option: if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link
  • Price: free
  • Registration: no registration required

Recap: How do you do it? A behind-the-scenes look at research workflows (2024)

12 December 2024, 00:02

Every academic year, the HDYDI (How Do You Do It?) event on research data workflows signals the start of the Digital Scholarship Module. Through a series of sessions and (mini-)workshops, Artes Research aims to guide students through the complexities of scholarship in the digital age, from Open Science to Research Data Management and beyond. At the HDYDI kick-off event, three researchers from the Faculty of Arts lift the curtain on their own research workflow and offer a behind-the-scenes look at the ways in which they approach their research, the data they engage with, and the tools they use in doing so. The goal of this session is to provide examples of more advanced workflows for the first-year PhD researchers as they embark on their own research journey. Hopefully this recap of the session can spark some inspiration for you!


Seb Verlinden – Using Obsidian as a note-taking tool for literature

The first speaker, Seb Verlinden, is a second-year PhD candidate in medieval history. Under the supervision of Maïka De Keyzer and Bart Vanmontfort, Seb is studying the long-term landscape changes – mainly in the form of gradual desertification – that characterize the Campine region, one of the driest areas in Belgium. Particular focus is on the impact of eighteenth-century drainage in the region.

Seb’s talk concerns an issue that all researchers can relate to, regardless of the relative complexity of their project – that of taking notes. It is true, as Seb highlights, that every researcher has their own unique workflow, often relying on a combination of tools that makes sense for them (in his case, QGIS, FileMaker Pro, MAXQDA, and spreadsheet software). But at the heart of any research process is the need to organize one’s thoughts, and this is where note-taking apps can make a real difference. So, what are some of the options out there?

Zotero is a possible solution – one we’ve already discussed elsewhere on this blog. As a reference manager first and foremost, Zotero has the potential to become a researcher’s living library, a knowledge base covering all relevant literature. It also has great capabilities for annotating PDFs, especially with its new 7.0 update. What you’re missing in the context of note-taking, however, is the big picture. Seb aptly points out that using Zotero to make notes is like putting post-its in books: you have no real overarching structure, and no way to easily link notes across books.

Other tools are likewise flawed. Lots of researchers use Microsoft Word to take notes, even though it is primarily tailored to longform writing. As a result, it is easy to lose track of notes unless you’re willing to navigate multiple files; and it tends to grow slow and cumbersome, since it is preoccupied with layout. It is, simply put, unintuitive for this purpose.

This is why Seb puts forward another solution, one that he believes to be faster, better automated, and easier to use: Obsidian. A widely supported and free tool, Obsidian does have its advantages: in contrast to both Microsoft Word and Zotero, it uses open-source file formats (.md or Markdown files, written in an accessible markup language) and it is full-text searchable and provides a structured overview of notes. Moreover, it offers a versatile workspace, allowing you to go as simple or as complex as you like – especially with the addition of supported plugins. One such plugin, in fact, allows your Obsidian environment to easily interoperate with your Zotero library (including references, bibliographies, and PDF annotations), which is particularly useful.

Seb ends his talk by highlighting another key benefit in using Obsidian. By introducing links in your notes, it is possible to cross-reference other notes within your system with minimal user effort; and through the use of tags, you can generate another layer of structure. Obsidian then uses this information to visualize the relations between your different notes, automatically creating a network of clusters that correspond to certain topics of interest. This way, it expands the possibilities of the data without the need for the researcher to make any real effort – a great reason to think about using Obsidian for your own note-taking needs!
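The link structure Obsidian visualizes can be reconstructed in miniature. The sketch below – with invented note contents loosely themed on Seb’s topic – extracts `[[wikilinks]]` from note bodies to build the kind of adjacency map that underlies the graph view.

```python
import re

# Invented sample notes; in Obsidian these would be .md files in a vault.
notes = {
    "Drainage": "Eighteenth-century drainage reshaped the [[Campine]] landscape.",
    "Campine": "One of the driest regions in Belgium; see [[Desertification]].",
    "Desertification": "Long-term change, linked to [[Drainage]] practices.",
}

# Capture the link target, stopping before closing brackets, aliases, or headings.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

graph = {title: WIKILINK.findall(body) for title, body in notes.items()}
print(graph["Campine"])  # ['Desertification']
```

From an adjacency map like this, clusters of related notes emerge automatically – which is exactly what Obsidian’s graph view renders for you.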

Seb showcased his own network of notes, automatically clustered by Obsidian. This way, he can visually grasp the connections between different topics of interest!

Laura Soffiantini – Managing linguistic and historical data: a PhD workflow using FileMaker

Laura Soffiantini is the second speaker: as a PhD researcher at the Cultural Studies Research Group, she is currently analyzing the geographical representation of Greece in Pliny the Elder’s Naturalis Historia. With the help of her supervisor Margherita Fantoli, Laura intends to shed new light on the way in which Greece was perceived in Flavian-era Rome. In order to do so, she has to manage a varied mix of linked data – textual, linguistic, and historical – as part of her daily routine.

Grappling with 37 books of a classical encyclopedia, and dealing with data in different formats and with different qualities (actual text, numeric coordinates, symbols, etc.), Laura realized the importance of proper Research Data Management. It enables aggregating, manipulating, analyzing, and comparing your data more efficiently throughout – and even beyond – the research process. Indeed, a challenge faced by many researchers is the retrieval of data collected or processed at an earlier time, with the aim of relating it to “new” data. In this context, Laura provides a look at her own research workflow.

The primary strategy in managing your data, she remarks, is to structure it. By adding structure to your data, you can parse it more easily and return to it without issues, even in later phases of your project. Software like Obsidian is indispensable for this purpose, but it’s also good to think about using tabular formats like .csv (an open plain-text format) as a way to organize your data. A useful tool put forward here is pandas, a Python library designed to help manage and analyze data derived from such .csv files. That might sound technical, but Laura assures us that – even if you have no background in programming – pandas is a very accessible and convenient tool for handling tabular files.

Having thought about what data she worked with (an essential step for every researcher), Laura adopted an initial workflow in three parts. She first started out with .json files containing Pliny’s text, which she converted into tabular .csv files, adding data related to the lemmatization of the corpus, part-of-speech tagging, and references to book and chapter positions. Subsequently, she thought about grouping this data into different categories, which she assigned to different columns – such that there is a column titled “book_chapter”, one titled “lemma”, and so on. Finally, Laura assigned identifiers to the information contained in these files; she explains she wasn’t aware of the importance of such identifiers at the start of the project, but now realizes they form a crucial part of keeping tabular data.
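Assuming a pandas-based version of this step, the categorized columns make lookups trivial. The rows below are invented Latin examples modelled on the column names Laura describes, not her actual data.

```python
import pandas as pd

# One column per category, as in Laura's .csv files.
tokens = pd.DataFrame(
    {
        "token_id": [1, 2, 3],
        "book_chapter": ["4.1", "4.1", "4.2"],
        "token": ["urbs", "Graecia", "urbes"],
        "lemma": ["urbs", "Graecia", "urbs"],
    }
)

# Where does the lemma "urbs" occur? A one-line filter answers it.
print(tokens[tokens["lemma"] == "urbs"]["book_chapter"].tolist())  # ['4.1', '4.2']
```

The same table can be written out with `tokens.to_csv(...)` and read back later with `pd.read_csv(...)`, which is what keeps the workflow reproducible across project phases.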

As a result, Laura ended up with multiple .csv files, which she then related to each other using FileMaker (with the expert assistance of Mark Depauw and Tom Gheldof). One table, for instance, contains a list of all the Latin words used (the tokens, e.g. urbs) alongside their identifier, book number, lemma, and possible identifier linked to the Trismegistos database of ancient texts. Another contains the lemma along with its part-of-speech tag (e.g. proper noun) and meaning (e.g. “city”). By linking the different files through the use of identifiers – the keys to the data – Laura made a relational database easily managed and organized through FileMaker. The resulting dataset is at the core of her research project.
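The identifier-as-key idea can be sketched in SQL, here using Python’s built-in sqlite3 module with invented sample rows modelled on the description above. FileMaker itself works differently under the hood, but the relational principle – joining tables through shared identifiers – is identical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    CREATE TABLE tokens  (token_id INTEGER, book INTEGER, token TEXT, lemma_id INTEGER);
    CREATE TABLE lemmata (lemma_id INTEGER, lemma TEXT, pos TEXT, meaning TEXT);
    INSERT INTO tokens  VALUES (1, 4, 'urbs', 10), (2, 4, 'Graeciae', 11);
    INSERT INTO lemmata VALUES (10, 'urbs', 'noun', 'city'),
                               (11, 'Graecia', 'proper noun', 'Greece');
    """
)

# The identifier is the key that links the two tables.
rows = con.execute(
    "SELECT t.token, l.pos, l.meaning FROM tokens t "
    "JOIN lemmata l ON t.lemma_id = l.lemma_id ORDER BY t.token_id"
).fetchall()
print(rows)  # [('urbs', 'noun', 'city'), ('Graeciae', 'proper noun', 'Greece')]
```

Because each token row stores only a `lemma_id`, information about the lemma (part of speech, meaning) lives in exactly one place and can be updated without touching the token table.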

The main takeaway Laura wants to leave us with is that it is important to create an environment in which you can efficiently collect, store, manipulate, and analyze your data. This should not come at the cost of traditional approaches and methodologies – in fact, you can add to them to create a better workflow as a whole!

Laura showed us some examples of how she used specific identifiers to connect tabular files and create a relational database in FileMaker.

Zakaria El Houbba – Obsidian as part of the research workflow

The third and final speaker is Zakaria El Houbba, third-year PhD candidate in Arabic Studies. Zakaria’s project, supervised by Arjan Post, focuses on the pre-modern relation between Islamic jurisprudence and Sufism, and in particular on the way in which these two strands are united in the figure of Aḥmad Zarrūq. In doing so, the research aims to come to a theory of applied legal epistemology in Zarrūq’s Sufism.

By discussing his own workflow in detail, Zakaria intends to highlight a number of key takeaways revolving around the idea of the “second brain”. Because we are so deeply involved with knowledge gathering on a daily basis, and constantly receive input from various sources (whether academic or not), we run the risk of being overwhelmed by a flood of information. When you use software to carry that burden for you, you can save your own brainpower for actual critical thinking rather than secondary tasks like categorizing information. This way, you’re effectively constructing what’s referred to as a second brain.

In this context, Zakaria also makes use of Obsidian, though he approaches it from a very different angle than Seb. Zakaria doesn’t actually enter all of his notes into Obsidian – he first uses an app like Microsoft OneNote as a “vault” to record random, non-processed thoughts, which he periodically goes through to think about how they fit in his project. He then sorts these thoughts and puts them in corresponding folders (relating to certain projects, classes, issues, etc.) in order to process them properly in Obsidian. Zakaria emphasizes that it’s fine to keep it simple and take it slow, focusing on what you specifically need from the note-taking environment so as not to get overwhelmed by all the options and information.

There are more tools Zakaria uses in his workflow – in fact, he says, there is a constant conversation between himself, Obsidian, Zotero, and ChatGPT. He uses Zotero to make notes and highlight text when reading articles, which he imports into Obsidian and categorizes using tags. Afterwards, he copies those highlights from Obsidian into ChatGPT, asking it to take up the role of copy editor and summarize the text. The resulting summary, which he critically revises, is then given a place in Obsidian once again.

Next to the powerful visualization capabilities discussed by Seb, Zakaria explains that Obsidian can also be used to create subpages within notes to explain terms and concepts, provide brief biographies of important figures, and so on. These “subnotes” can be linked back to in other notes as well, resulting in a kind of personalized Wikipedia for your research topic. This can also be helpful when you’re following classes on a certain topic or revising your own teaching material!

Finally, speaking of teaching material, Zakaria points us to a couple of helpful AI tools that can be used to process video files, such as recorded lectures or talks – whether you attended them or gave them yourself. One such tool is NoteGPT, which essentially functions as a transcriber and summarizer of recordings. You can revise and copy the resulting transcriptions and summaries into Obsidian as well, further expanding the scope of your second brain. Brisk Teaching serves a similar purpose as NoteGPT, but can also be used to turn a video into a PowerPoint presentation, which can be very convenient and time-saving. By thus constructing a workflow, gradually accumulating relevant information through different tools, it becomes much easier to manage your research.

The home tab of Zakaria’s Obsidian environment. As both he and Seb explained, you can make it as simple or complex as you like – try to make it a welcoming space for your daily research workflow!

The workflows of the presenters reveal both similarities and differences, but there’s one thing all three can agree on – what’s important is to find a workflow that works for you. To that end, take inspiration from some of the tools and processes described here, but always make sure they support your specific research methods. This was emphasized in the questions as well: don’t feel pressured to adopt a tool like Obsidian, but try it out and see if it accommodates your needs. Who knows, you might uncover a more efficient workflow or see your data from a new perspective.

Happy holidays from the Artes Research team, and may your data be blessed in the year to come! 🎄

Training: How Do You Do (It)? A behind-the-scenes look at research workflows (KU Leuven)

14 October 2024, 18:12

This event is only open to KU Leuven researchers and staff.

The Artes Research team from KU Leuven Libraries Artes and the ABAP council will kick off the new academic year with a special “How Do You Do (It)?” (HDYDI) session dedicated to research data workflows. This special session will coincide with the start of the Digital Scholarship Module taught by the Artes Research team. It will take place on Tuesday 5 November, 13h30-16h00, in the Justus Lipsiuszaal (Erasmushuis, Leuven).

Everyone is welcome to attend, you do not need to register!

Program

13h30-14h30

To help you through the afternoon slump, we will start with coffee and cookies which will be served in the main entrance hall of the Erasmushuis.

14h30-16h00

We will then move up to the 8th floor (Justus Lipsiuszaal) to start the session which will feature talks from researchers at the Faculty of Arts who outline their research workflows: how do they approach their research, what tools do they use, with what kind of data are they working, etc. We will get a behind-the-scenes look from:

There will be lots of time for questions and getting to know each other’s workflows.

The event will take place in Leuven, but if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link.

Practical details

  • When: Tuesday 5 November, from 13h30 to 16h00
  • Where: coffee in main entrance hall and session in Justus Lipsiuszaal (Erasmushuis, Leuven) with online option: if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link
  • Price: free
  • Registration: no registration required

Recap: How do you do it? A behind-the-scenes look at research workflows (2023)

6 December 2023, 17:55

Each academic year, we at Artes Research kick off the Digital Scholarship Module – a training for first-year PhD researchers at the Faculty of Arts – with a session dedicated to research data workflows. Three researchers from the Faculty of Arts offer a behind-the-scenes look at their research workflows by outlining how they approach and structure their research, the tools they use, and the kind of data they are working with. The goal of this session is to provide examples of more advanced workflows for the first-year PhD researchers as they embark on their research journey. Hopefully this recap of the session can spark some inspiration for you!

Vicente Parrilla López – Plain text and structured notetaking

Vicente’s research, which is in the field of musicology, focuses on reviving the Renaissance practice of improvised counterpoint. Apart from being a PhD researcher, he is also a musician and recorder player. In his research workflow, Vicente consistently seeks out tools to enhance efficiency and further streamline the structure of his work.

Vicente introduced us to the versatility and accessibility of plain text files, highlighting that this file format is universally usable across computers and software platforms. One drawback, however, lies in readability, due to the absence of text formatting and typographic refinement. Fortunately, applications like iA Writer, which let users apply additional formatting through Markdown, address this issue.

There is a wide array of digital tools for structured notetaking out there. In addition to iA Writer, other examples include Obsidian and Notion. The key is to choose the tool that best suits your needs and preferences.

Vicente highlights the advantages of using plain text files for structured notetaking in conjunction with applications like iA Writer:

  • Distraction-free writing: plain text notetaking ensures an undisturbed writing experience with basic formatting; once you are finished you can preview your text for example as HTML or PDF output.
  • Versatility: plain text files are very adaptable; they can be exported to various formats such as HTML for websites, DOC for Microsoft Word, or PDF, and can even serve as source files for languages and formats like Python, Java, JSON, CSS, XML, and LaTeX.
  • Interconnectedness: notetaking tools like these often incorporate a tagging system that facilitates connections between concepts and ideas.
  • Search capability: these tools also offer robust search functionalities, ensuring swift and efficient retrieval of desired information.
 

An important aspect of Vicente’s notetaking workflow is the integration of structured metadata. Vicente implements a dedicated metadata section at the beginning of each note, enhancing the categorization and contextualization of his notes. In general, adding metadata in a systematic way offers several advantages. By recording key details like creation date, authorship, and related keywords, metadata enriches a note by adding surrounding context. Additionally, metadata enhances searchability by allowing the user to search for specific information or themes across an entire note repository. Lastly, structured metadata can foster collaboration between various users but also across different projects.
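By way of illustration (the field names below are hypothetical, not Vicente’s actual template), such a metadata section at the top of a Markdown note could look like this:

```markdown
---
title: Improvising over a cantus firmus
created: 2023-11-07
author: Vicente Parrilla
tags: [counterpoint, improvisation, sources]
---

The note itself follows the metadata block in plain Markdown, and tools
like iA Writer or Obsidian can index the fields above for search.
```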

Vicente also introduced us to the concept of text expanders. This type of software replaces designated keystrokes, known as ‘shortcuts’ or ‘abbreviations’, with expanded text segments. Its strength lies in expediting the writing process by swiftly inserting frequently used words or phrases into articles, grant applications, and more. It can also help to integrate standardized metadata and bibliographic entries with ease. Text expander software gives Vicente a streamlined writing experience; used systematically, it also helps him keep various documents consistent and saves him the time he would otherwise spend manually typing out the phrases he uses frequently in his research and writing.
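The core mechanic of such software can be sketched in a few lines of Python (the abbreviations below are invented, not Vicente’s actual shortcuts):

```python
# Hypothetical abbreviation table; real text expanders watch your keystrokes
# system-wide, but the substitution logic is the same.
ABBREVIATIONS = {
    ";cp": "counterpoint",
    ";ric": "Renaissance improvised counterpoint",
}

def expand(text, table=ABBREVIATIONS):
    """Replace every abbreviation in `text` with its expansion.

    Longer shortcuts are applied first so that overlapping
    abbreviations do not clobber each other.
    """
    for abbr in sorted(table, key=len, reverse=True):
        text = text.replace(abbr, table[abbr])
    return text

print(expand("Notes on ;ric and ;cp pedagogy"))
# -> Notes on Renaissance improvised counterpoint and counterpoint pedagogy
```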

Stijn Carpentier – Digitized source material and distant reading

Within the Negotiating Solidarity project, Stijn’s research aims to uncover and contextualize the wide variety of contacts between actors within Belgian civil society and the rapidly growing influx of foreign guest workers from the 1960s to the 1990s. Despite labeling himself a hobbyist in the Digital Humanities realm, Stijn presented an inspiring workflow that merges historical research with digital tools.

Stijn’s journey into DH was triggered by his source material. For his research, he wanted to explore how guest workers in Belgium were communicating about their activities and their ideas through periodicals and other types of serial sources. As the term suggests, serial sources are published at regular intervals, resulting in an overwhelming volume of material that cannot always be read entirely during the timeframe of a PhD project. Consequently, Stijn sought an efficient method to comprehensively analyze this extensive array of sources without having to read them all in full.

The first step to achieve this goal was digitization. Stijn encountered both undigitized and poorly OCR’d digitized sources, prompting him to undertake the digitization process himself. However, digitization is time-consuming; hence, Stijn emphasizes the importance of collaboration with the archives or institutions housing the materials. They may offer assistance in digitizing the content or provide access to their scanning equipment and OCR software. Stijn stresses that while digitized sources offer many advantages such as searchability, it remains crucial to engage with the physical materials. Understanding the contextual nuances of their creation and preservation is imperative, rather than treating them merely as isolated PDF files.

Once he tackled the first hurdle of digitization, Stijn delved into distant reading, a text analysis method enabling insights into vast corpora without the need for exhaustive reading. To conduct this analysis, he used the software AntConc.

AntConc is a free, cross-platform tool for corpus analysis. Other tools offer similar features, such as Voyant, Hyperbase, and Sketch Engine.

After loading his documents into AntConc, Stijn could perform basic word searches and proximity-based word analysis. The tool also enables tracking keyword mentions over time, which helps to reveal patterns and how they evolved. As a result, Stijn could efficiently extract core ideas from an extensive corpus, a task that would have been impossible to complete during his PhD using close reading alone. Such tools not only extract information but also foster creativity in research, encouraging novel perspectives on the material that might otherwise remain unexplored.
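As a rough stand-in for what such corpus tools do (a deliberately simplified sketch; the corpus below is invented, not Stijn’s data), counting a keyword’s frequency per year takes only a few lines of Python:

```python
import re
from collections import Counter

# Invented placeholder corpus: publication year -> text of an issue.
corpus = {
    1965: "guest workers arrive and workers organise meetings",
    1975: "solidarity with workers grows as committees form",
    1985: "committees publish periodicals about solidarity",
}

def keyword_counts(texts, keyword):
    """Count occurrences of `keyword` per year (case-insensitive)."""
    counts = {}
    for year, text in texts.items():
        tokens = re.findall(r"[a-z]+", text.lower())
        counts[year] = Counter(tokens)[keyword]
    return counts

print(keyword_counts(corpus, "workers"))     # -> {1965: 2, 1975: 1, 1985: 0}
print(keyword_counts(corpus, "solidarity"))  # -> {1965: 0, 1975: 1, 1985: 1}
```

Plotting these per-year counts gives the kind of diachronic keyword overview described above; AntConc provides this out of the box, without any coding.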

Stijn concluded by comparing Digital Humanities to a Swiss army knife: it is like a versatile tool that doesn’t necessarily need to be the focal point of your project but serves as a valuable instrument for exploring both your sources and your research domain. Beyond that, DH facilitates connections with peers. Belgium boasts a vibrant Digital Humanities community, offering ample opportunities for networking and learning from a diverse group of experts and enthusiasts.

If you want to get involved in the DH community in Belgium you can join the DH Virtual Discussion group for Early Career Researchers. The discussion group meets on a monthly basis via MS Teams. Each meeting features a presentation from a member of the Belgian DH community, a moment to share DH-related news, and a chance to network.

Tom Gheldof – A day in the (tool) life

Tom Gheldof is the CLARIAH-VL coordinator at the Faculty of Arts. Throughout the years, he has been involved in several Digital Humanities projects, such as the Trismegistos project at the Research Unit of Ancient History. Currently, he is a scientific researcher on the ‘CLARIAH-VL: Advancing the Open Humanities Service Infrastructure’ project, which aims to develop and enhance digital tools, practices, resources, and services for researchers across many fields of the humanities.

Tom provided an insider’s view of his typical day, shedding light on the various tools he employs:

  • Identification: to introduce himself, Tom showcased his ORCID iD, a persistent digital identifier that sets researchers apart regardless of name similarities. It serves as a central hub to which you can link all of your research output. Not only does it boost the visibility of your work, it also streamlines administrative tasks, as you only need to update one platform that you can then connect with your funder, publishers, etc.
  • Text recognition: given that Tom’s research relies on manuscripts, he has familiarized himself with automated text recognition. His primary tool for this is Transkribus, a platform that uses machine learning technology to automatically decipher handwritten and printed texts. Through a transcription editor, users within the Transkribus community transcribe historical documents, training the system to recognize diverse text forms – be it handwritten, typewritten, or printed – across various languages, predominantly European.
  • Annotation: Tom relies on Recogito for his research on place names. This online annotation tool offers a user-friendly interface for both texts and images. Recogito provides a personalized workspace to upload, collect, and organize diverse source materials such as texts, images, and tabular data. Moreover, it facilitates collaborative annotation and interpretation of these resources.
  • Coding: for coding tasks, Tom uses Visual Studio Code, a free coding editor compatible with multiple programming languages. To collaborate and access code with open licenses, he turns to GitHub, a repository where people share their code, fostering a collaborative coding environment.
  • Relational databases: Tom has a lot of expertise in building relational databases, which allow you to represent complex datasets and the connections between and within different types of data. He uses the FileMaker environment, which offers broad functionality and lets you export the data to other formats.
Tom has given training sessions on relational databases in general, and FileMaker in particular, in the past. An overview of existing training material can be found on the DH@rts website.

To familiarize yourself with these and similar tools and methods, Tom recommends exploring the tutorials that are available at The Programming Historian, a DH journal that offers novice-friendly, peer-reviewed instructional guides.

Through trial and error, the presenters have figured out their workflows, which will hopefully inspire you to tailor your own data management processes. They all emphasized, however, that the best research workflow is the one that works for you. For further inspiration on DH and research data, consider joining DH Benelux 2024, hosted by KU Leuven. This year’s conference, themed “Breaking Silos, Connecting Data: Advancing Integration and Collaboration in Digital Humanities”, is sure to bring much more inspiration on organizing, manipulating, and sharing research data.

Training: How Do You Do (It)? A behind-the-scenes look at research workflows (KU Leuven)

13 October 2023, 17:17

This event is only open to KU Leuven researchers and staff.

The Artes Research team from KU Leuven Libraries Artes and the ABAP council will kick off the new academic year with a special “How Do You Do (It)?” (HDYDI) session dedicated to research data workflows. This special session will coincide with the start of the Digital Scholarship Module taught by the Artes Research team. It will take place on Tuesday 7 November, 13h30-15h30, in the Justus Lipsiuszaal (Erasmushuis, Leuven). Everyone is welcome to attend; you do not need to register!

Program

13h30-14h

To help you through the afternoon slump, we will start with coffee and cookies which will be served in the main entrance hall of the Erasmushuis.

14h10-15h30

We will then move up to the 8th floor (Justus Lipsiuszaal) to start the session which will feature talks from researchers at the Faculty of Arts who outline their research workflows: how do they approach their research, what tools do they use, with what kind of data are they working, etc. We will get a behind-the-scenes look from:

There will be lots of time for questions and getting to know each other’s workflows.

The event will take place in Leuven, but if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link.

Keep an eye out for the next HDYDI event that will take place in Spring!

Practical details

  • When: Tuesday 7 November, from 13h30 to 15h30
  • Where: coffee in main entrance hall and session in Justus Lipsiuszaal (Erasmushuis, Leuven) with online option: if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link
  • Price: free
  • Registration: no registration required

Recap: How do you do it? A behind-the-scenes look at research workflows

8 December 2022, 16:14

To kick off the Digital Scholarship Module, a training for first-year PhD researchers at the Faculty of Arts, we, at Artes Research, hosted a training session dedicated to research data workflows. Three researchers from the Faculty of Arts offered a behind-the-scenes look at their research workflows by outlining how they approach and structure their research, the tools they use, and the kinds of data they work with. The goal of the session was to provide examples of more advanced workflows for the first-year PhD researchers as they embark on their research journey.

Elisa Nelissen: applying digital tools throughout the entire research workflow

Elisa is a PhD researcher under the supervision of Jack McMartin, working on the interdisciplinary project “The Circulation of Science News in the Coronavirus Era” in collaboration with the KU Leuven Institute for Media Studies. Her research focuses primarily on how science news about COVID-19 vaccines travels to and from Flanders, and the inter- and intralingual translations it is subject to.

Elisa started off the session by introducing us to the tools she applies during various steps of her research workflow, leaving us with plenty of food for thought.

Literature collection

For collecting all the literature that holds potential relevance for her research, Elisa uses Zotero, as it has some very interesting features: full-text search (which makes it easy to look up specific concepts), highlighting and color-coding interesting sections or terms, and more.

Go check out our blog posts about Zotero if you are interested in learning more!

Reading literature and tracking progress

After gathering the relevant literature, reading all the collected material naturally follows. Here, Elisa had a very useful tip for those who, just like her, easily lose focus when reading a text: why not try turning texts into audio files? Listening helps Elisa follow the text more closely and take notes along the way. She also keeps close track of her reading progress using the productivity application Notion. Apart from creating reading lists, Notion helps her keep an overview of her project’s progress, upcoming tasks, and more.

Data collection

Collecting her data also required Elisa to acquaint herself with new digital tools. A first important data source for her research is news articles. As Elisa did not yet know how to code, she followed some online Python courses to learn the basic skills needed; thanks to this, she can now scrape websites for the metadata of news articles. Another important element of her data collection is conducting interviews, for which she finds it very important to invest in proper recording systems and equipment to guarantee the usability of the material.
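To give a flavour of the scraping step (a minimal sketch using only the Python standard library; the HTML page and field names are invented, and Elisa’s actual scripts may look quite different), metadata can be pulled from a news article’s HTML like this:

```python
from html.parser import HTMLParser

# Invented sample page; a real scraper would fetch this over HTTP.
HTML = """
<html><head>
<title>Vaccine rollout begins</title>
<meta name="author" content="A. Reporter">
<meta property="article:published_time" content="2021-01-05">
</head><body>...</body></html>
"""

class MetaExtractor(HTMLParser):
    """Collect <meta> name/property -> content pairs and the <title>."""

    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "content" in attrs:
            key = attrs.get("name") or attrs.get("property")
            if key:
                self.meta[key] = attrs["content"]
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] = data.strip()

parser = MetaExtractor()
parser.feed(HTML)
print(parser.meta)  # title, author, and publication date of the article
```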

If you are a KU Leuven student or staff member, you can borrow audiovisual equipment from the lending service of LIMEL for free! Check out all the details here.

Besides interviews, she conducts surveys with Qualtrics. Her best advice here is to test your surveys thoroughly: once a survey has been sent out, you can no longer change its questions, so you have to be sure the chosen questions will deliver the results you need.

KU Leuven researchers can purchase a Qualtrics license through ICTS, more information can be found here.

Data analysis

First, in order to correctly organize and analyze all the collected data from news articles, Elisa felt the need to build a relational database in FileMaker. It helps her to organize her data, compare texts, and keep track of her overall workflow.

If you are interested in knowing more about FileMaker, check out this training session given by Tom Gheldof in the context of the DH training series organized by the Faculty of Arts.

Secondly, for transcribing the interviews she conducted, she uses Sonix, an automated transcription service. It offers good-quality transcriptions that you can edit yourself afterwards. Elisa stresses the importance of anonymizing your interviews before sending them in, to make sure you do not unwittingly share any personal data! Lastly, for coding the interviews she uses NVivo.

KU Leuven researchers can purchase an NVivo license through ICTS, more information can be found here.

To conclude her talk, Elisa left us with a useful tip: it might be interesting to try out a different browser (in her case Sigma) as this might give you new perspectives about how to structure and manage your daily work.

Sara Cosemans: using digital research methods to deal with information overload

Sara is a Doctor Assistant in Cultural History at KU Leuven and a part-time Assistant Professor in the School of Social Sciences at UHasselt. The digital method discussed below was developed during her PhD at KU Leuven, together with data scientists Philip Grant, Ratan Sebastian, and computational linguist Marc Allassonnière-Tang. Learn more about her digital approach in this blog post.


Sara’s presentation was based on her PhD project entitled “The Internationalization of the Refugee Problem. Resettlement from the Global South during the 1970s”, which initially started off as a very analogue project. However, when facing some serious challenges, Sara started to explore digital methods. Her journey was one of trial and error, with a lot of investment in, on the one hand, educating herself in how to use digital tools, and, on the other hand, building a network of digital experts to collaborate with.

Sara’s project required many archival visits in various countries. When going into the archives, she did not yet know exactly what she was looking for, making it necessary to scan every piece of information that held potential relevance; analysis of the content would have to wait. By the end of her archival visits, however, she had amassed an unimaginably large corpus of about 100,000 pages. She quickly realized that she would never be able to read everything and needed to come up with a digital solution.

To photograph the archival documents, Sara used her iPad as this had a big enough storage capacity and rendered high quality pictures. By using ABBYY FineReader she could subsequently apply Optical Character Recognition (OCR), which converted these photographs into fully text-searchable documents.

We recently organized a two-day workshop devoted to OCR, you can download the slides on this webpage that collects information and resources about the DH trainings offered by the Faculty of Arts.

The next question, however, was how to search through all these files. A first idea was to build a relational database in FileMaker, which would mean entering the metadata of the files coming from the different institutions into the database, with the ultimate goal of relating those files to one another. Unfortunately, entering the metadata was so time-consuming that it could only be completed for one institution, so she needed another solution. Since all her photographs were now searchable documents, a first quick way to find information she was already expecting to find was simply the Ctrl + F function. But how can you find what you don’t already know? Here, natural language processing (NLP) proved to be the solution.
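Once the corpus is machine-readable, that Ctrl + F step can itself be scripted across thousands of files at once; a minimal sketch (the file names and contents below are invented, not Sara’s sources):

```python
import pathlib
import tempfile

# Build a tiny invented corpus on disk to stand in for the OCR'd archive.
root = pathlib.Path(tempfile.mkdtemp())
(root / "geneva_1975.txt").write_text("UNHCR memo on resettlement quotas")
(root / "brussels_1976.txt").write_text("internal note on family reunification")

def search(folder, term):
    """Return the names of all .txt files whose text contains `term`."""
    return sorted(
        path.name
        for path in pathlib.Path(folder).glob("*.txt")
        if term.lower() in path.read_text().lower()
    )

print(search(root, "resettlement"))  # -> ['geneva_1975.txt']
```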

Since Sara did not have the time to learn natural language processing methods like topic modelling and clustering herself, she invested her energy in networking at DH conferences, which led her to researchers who were very eager to work with her data. Together they developed a Google Colaboratory notebook in Python to run topic modelling on all the files, determine topics, and make visualizations. They then created reading lists of the most important documents so that Sara could start by reading those files. This close reading enabled Sara to find new topics, which she could then explore further in other documents using her Ctrl + F method.
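Topic modelling itself relies on dedicated libraries, but the underlying intuition of grouping documents by shared vocabulary can be sketched in plain Python with a cosine similarity over word counts (a deliberately simplified stand-in for the actual pipeline; the documents are invented):

```python
import math
from collections import Counter

# Invented stand-ins for OCR'd archive files.
docs = {
    "memo_a": "refugee resettlement quota refugee",
    "memo_b": "resettlement quota policy",
    "memo_c": "budget travel expenses",
}

def vector(text):
    """Represent a document as a bag-of-words count vector."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors (0 = unrelated)."""
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

va, vb, vc = (vector(docs[k]) for k in ("memo_a", "memo_b", "memo_c"))
print(round(cosine(va, vb), 2))  # -> 0.47 (both memos discuss resettlement)
print(cosine(va, vc))            # -> 0.0 (no vocabulary in common)
```

Real topic-modelling pipelines add term weighting, dimensionality reduction, and probabilistic topic inference on top of this basic document-similarity idea.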

Sara concluded by saying that while she needed digital methods to make her research manageable and to help her find relevant connections, the analysis of the material still depended completely on her. The computer will never fully replace the close-reading, deep-thinking historian.

Mariana Montes: reproducibility and versioning as two important keys to a successful coding project

Mariana’s main research interests lie in corpus linguistics and cognitive semantics. The goal of her PhD project is methodological triangulation of distributional methods (namely, comparing vector space semantics, behavioral profiles and traditional lexicographical analysis), with case studies in English and Dutch. Some of the tools developed and used within the project can be found on her personal webpage. She recently also started working at ICTS, where she supports research data management.

Mariana’s interest in digital methods and tools was piqued when studying languages, for which she needed to acquaint herself with statistics and programming. During her talk she therefore stressed the importance of challenging yourself to learn new skills and use new digital tools. Over the past years, she has actively helped fellow researchers try out new methods to achieve greater efficiency in their work.

Her main expertise is in R. She showed us how R can be used in multiple ways throughout your research: creating plots, making interactive reports, presenting slides, coding workflows, and so forth. On her blog, Mariana wrote an interesting piece about how you can implement R-project tools in your workflow.

Mariana also underlined that your work should be reproducible both for yourself and for other people. During her research, Mariana experimented a lot with running different code, trying out different clustering algorithms, and so on. She ended up forgetting how she had reached her results, making it necessary to double- or triple-check everything. She therefore started carefully registering all the steps in her workflow in order to put the reasoning behind her code into words. This way, she could answer questions like “What decisions did I make, and why?”. Mariana has written more extensively about how your old, current, and future self might not understand your decisions in this insightful blog post.

In the same vein, Mariana highlighted how versioning can be a true life-saver. For this, she uses Git. Git lets you control versions, keep track of the differences between files, restore files that were removed, and take a snapshot of the state of your files at a given time. Combined with a hosting service, this also gives you an online backup that you can share with other people.

KU Leuven hosts its own GitLab, you can find more detailed information here.

To conclude with an important message that was shared throughout all the presentations: despite popular belief, doing a PhD should not mean working in isolation. Instead, you should look for ways to connect with other researchers. A willingness to make the process of developing the dissertation visible can only improve the project and stimulate collaborations, which might help solve the problems you are facing, open up new research avenues, and generate new perspectives.

Training: How do you do it? A behind-the-scenes look at research workflows (KU Leuven)

18 October 2022, 15:11

This event is only open to KU Leuven researchers and staff.

The Artes Research team and the ABAP council will kick off the new academic year with a special “How do you do (it)?” (HDYDI) session dedicated to research data workflows. It will take place on Thursday 10 November, 15h30-17h30, in the Justus Lipsiuszaal (Erasmushuis, Leuven). 

Program

15h30-16h

To help you through the afternoon slump, we will start with coffee and cookies which will be served in the central hallway on the 7th floor of the Erasmushuis.

16h-17h30

We will then move up to the 8th floor (Justus Lipsiuszaal) to start the session which will feature talks from researchers at the Faculty of Arts who outline their research workflows: how do they approach their research, what tools do they use, with what kind of data are they working, etc. We will get a behind-the-scenes look from:

  • Sara Cosemans (History, Cultural History since 1750)
  • Mariana Montes (Linguistics, Quantitative Lexicology and Variational Linguistics)
  • Elisa Nelissen (Translation Studies, Translation and Intercultural Transfer)

There will be lots of time for questions and getting to know each other’s workflows.

Practical details

  • When: Thursday 10 November, from 15h30 to 17h30
  • Where: coffee in central hallway 7th floor and session in Justus Lipsiuszaal (Erasmushuis, Leuven) with online option: if you would like to join online you can let us know at artesresearch@kuleuven.be and we will provide you with the link
  • Price: free
  • Registration: no registration required