
ediarum.MEETUP – next virtual event on 24 November 2025

14 November 2025, 01:14

Dear ediarum community, dear friends of ediarum, dear colleagues,

On behalf of the Text+ consortium of the German National Research Data Infrastructure (NFDI) and the ediarum team at the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW), we cordially invite you to the next virtual ediarum.MEETUP:

on Monday, 24 November 2025, at 11:00 a.m. sharp

Marcus Lampert from the TELOTA department of the BBAW will present a new ediarum module: ediarum.WEBDAV. Introduced at the BBAW a year ago, ediarum.WEBDAV aims to provide a secure and transparent system for editing, storing, and backing up XML research data. Almost ten projects at the BBAW already use the software daily.

Marcus will present the software from several angles: first, he will demonstrate how users work with the system via Oxygen and the user interface. He will then show how ediarum.WEBDAV uses automatic Git commits to store and back up XML research data reliably. Finally, we will look at parts of the code together to understand how the Laravel framework coordinates the software's various components.
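ediarum.WEBDAV itself is a Laravel (PHP) application, but the core idea of versioning every save is easy to sketch. The following Python fragment is purely illustrative and not taken from the module: it commits one changed XML file to a Git repository, which is roughly what an automatic commit on each WebDAV save amounts to (paths and names are hypothetical).

```python
import subprocess
from pathlib import Path

REPO = Path("/data/xml-repo")  # hypothetical Git repository holding the XML data

def commit_on_save(relative_path: str, user: str) -> None:
    """Record one automatic commit for a file that was just saved.

    Illustrates the idea behind automatic Git commits: every save
    becomes a versioned, attributable change.
    """
    subprocess.run(["git", "add", relative_path], cwd=REPO, check=True)
    subprocess.run(
        ["git", "commit", "-m", f"auto-save: {relative_path} (by {user})"],
        cwd=REPO,
        check=True,
    )

# e.g. invoked by the server-side handler of a WebDAV PUT request:
# commit_on_save("briefe/brief_0042.xml", "mlampert")
```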

The event takes place online; no registration is required. At the scheduled time, the virtual conference room can be reached via https://meet.gwdg.de/b/nad-mge-0rq-ufp.

***

Further information about the meetup is available on the ediarum website (https://www.ediarum.org/meetups.html).

The ediarum.MEETUP is aimed primarily at DH developers who want to discuss specific ediarum development questions, but ediarum users and anyone else interested are also very welcome.

We look forward to seeing many of you there!

Best regards
Nadine Arndt, on behalf of the ediarum coordination team

The Imperfection of AI Detection Tools

10 October 2025, 04:30
[Image: magnifying glass with focus on glass]

As AI-generated content continues to grow and develop, so do the issues that come with it. For academic workers, one of the key challenges is discerning AI-generated content from human-generated content....

American Religious Ecologies Team Completes Digitization

29 August 2025, 20:30
American Religious Ecologies seeks to understand how congregations from different religious traditions related to one another by creating new datasets, maps, and visualizations for the history of American religion. After years of photographing, editing, cataloging, and uploading schedules to the American Religious Ecologies website, we are excited to announce that we have uploaded the last […]

Internship: Developing Digital Humanities Resources for the DH@rts Platform

26 August 2025, 18:07

Each year the Artes Research team offers the opportunity for students to do an internship with our team. During spring 2025, Helin Toprak, a student in the Advanced Master in Digital Humanities, joined us.

The Artes Research team frequently (co-)organizes training opportunities and collects training resources for researchers at the Faculty of Arts. Helin's internship focused on this aspect of our work. During her three months with us, she developed resources on a variety of tools we find useful for our researchers. Helin created tutorials showcasing the functionalities of OpenRefine and of two Knight Lab tools: Timeline JS and StoryMap JS.

OpenRefine is a tool that is useful for nearly all researchers who work with structured data and computational methods. Data cleaning and transformation is a crucial early stage of the research data workflow, and OpenRefine is a free, open-source, web-based tool that lets users do just that. During her internship, Helin created a tutorial to help researchers get started with this tool. The tutorial is designed for users who have no experience with OpenRefine and want to learn about its features and explore its uses.

The other two resources that Helin created focus on two tools from the Knight Lab suite. Timeline JS is an open-source tool developed to help users create interactive timelines. This is an accessible tool that anyone can use. The web-based tool only requires users to enter their data into a Google spreadsheet; it is then ready to go, with multiple options for customization. Advanced features allow those with more expertise to use their JSON skills to customize their output further.
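For that advanced route, TimelineJS can also be fed a JSON file instead of a Google spreadsheet. As a rough sketch of the documented JSON format (the event content below is invented for illustration), a minimal timeline can be generated like this:

```python
import json

# A minimal timeline with one event, following the documented TimelineJS
# JSON format (https://timeline.knightlab.com/docs/json-format.html).
timeline = {
    "title": {"text": {"headline": "My research timeline"}},
    "events": [
        {
            "start_date": {"year": 1901, "month": 5},
            "text": {
                "headline": "Example event",
                "text": "<p>A short description of the event.</p>",
            },
        }
    ],
}

with open("timeline.json", "w", encoding="utf-8") as f:
    json.dump(timeline, f, indent=2)
```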

StoryMap JS is also a free, web-based tool developed by Knight Lab. This tool is designed to be highly visual. Users can add images and text to maps, allowing them to create a story or illustrate events or situations relevant to their research topics. This tool is just as accessible as Timeline JS and can be customized to fit a researcher's needs and style.

To learn about these two tools as well as OpenRefine, you can have a look at the resources that Helin created during her internship. They are accessible via a Zenodo record (make sure to look through all the documents in the record, one for each separate resource).

We would like to thank Helin for her great work during her internship! She was a pleasure to have as an intern, and we wish her all the best in her career after graduating from the Advanced Master in Digital Humanities!

Graduate Student Reflections: Sustainability Summer

26 August 2025, 03:40
This past summer I had the opportunity to work on RRCHNM’s sustainability team. Our work focused on flattening websites built with content management systems (CMS), such as Drupal, Omeka, and WordPress. Flattening refers to the process of simplifying dynamic, database-backed websites to static versions built with only HTML, CSS, and JavaScript. This minimizes server space […]

Departmental Website Pilot Program Applicants Wanted

21 July 2025, 23:00
[Image: closeup of mithril chainmail armor]

The Humanities Technology team is excited to announce the upcoming release of our new departmental website framework, Mithril (Modern Intuitive Technology Resource Interaction Library). Mithril is a next-generation WordPress site...

ediarum.MEETUP – next virtual event on 14 July 2025

25 June 2025, 02:25

Dear ediarum community, dear friends of ediarum, dear colleagues,

On behalf of the Text+ consortium of the German National Research Data Infrastructure (NFDI) and the ediarum team at the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW), and in cooperation with the BBAW's Gender & Data working group, we cordially invite you to the next virtual ediarum.MEETUP:

on Monday, 14 July 2025, at 11:00 a.m. sharp

On the topic of Encoding Gender, we are pleased to announce the following contributions:

Topic block: Encoding

  • Nadine Arndt (BBAW/TELOTA): Markup of "sex" & "gender" in ediarum
  • Marius Hug and Frank Wiegand (BBAW/Text+): "Bevorzugte Waffen der Frauen" ["Women's weapons of choice"] – annotations in the Deutsches Textarchiv as a prerequisite for gender-specific corpus analysis with the DWDS

Topic block: Authority data

  • Sabine von Mering (Museum für Naturkunde Berlin): The potential of Wikidata for making women visible – the gender data gap in natural history
  • Julian Jarosch, Denise Jurst-Görlach, and Thomas Kollatz (Akademie der Wissenschaften und der Literatur Mainz): Gender attribution in the GND and entityXML, using the correspondence of Martin Buber as an example

The meetup is meant to foster exchange, identify problem areas, and develop possible solutions together. We look forward to diverse perspectives and a lively discussion!

The event takes place online; no registration is required. At the scheduled time, the virtual conference room can be reached via https://meet.gwdg.de/b/nad-mge-0rq-ufp.

***

Further information about the meetup is available on the ediarum website (https://www.ediarum.org/meetups.html).

The ediarum.MEETUP is aimed primarily at DH developers who want to discuss specific ediarum development questions, but ediarum users and anyone else interested are also very welcome.

We look forward to seeing many of you there!

Best regards
Nadine Arndt and Frederike Neuber
on behalf of the ediarum coordination team and the Gender & Data working group

Report from the Seventh Conference on Digital Humanities and Digital History

13 May 2025, 21:15
From March 19th to March 21st, 2025, the German Historical Institute (GHI) in Washington, DC hosted the Seventh Conference on Digital Humanities and Digital History. The conference theme, real-time history, drew on Roy Rosenzweig’s call to action that historians need to directly address the methodological potential and risks of the digital age. Designed as a […]

Save the Date (14 July 2025) and Call for Contributions: Virtual ediarum.Meetup on "Encoding Gender"

7 April 2025, 15:41

On behalf of the Text+ consortium of the German National Research Data Infrastructure (NFDI) and the ediarum team at the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW), and in cooperation with the BBAW's Gender & Data working group, we are pleased to announce the next virtual ediarum.Meetup:

Date: 14 July 2025*
Venue: online
Time: 11:00 a.m. – 12:30 p.m.
Topic: Encoding Gender

In addition to an introduction to the topic and a presentation of the ediarum feature for encoding sex and gender, we invite projects working on the encoding of gender to submit short contributions (approx. 5–10 minutes). The meetup is meant to foster exchange on this topic, identify problem areas, and develop possible solutions together. Whether you face modelling challenges or have concrete solutions in TEI/XML – we look forward to diverse perspectives and a lively discussion!
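As a purely illustrative example of what such encoding can look like – this is one possible TEI P5 modelling, not the ediarum solution presented at the meetup – a personography entry might record sex and gender as separate elements, read here with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Illustrative TEI-style personography entry. TEI P5 distinguishes <sex>
# from <gender>; the values shown are one possible modelling only.
tei = """
<person xmlns="http://www.tei-c.org/ns/1.0" xml:id="p001">
  <persName>Example Person</persName>
  <sex value="F"/>
  <gender>non-binary</gender>
</person>
"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
person = ET.fromstring(tei)
print(person.findtext("tei:gender", namespaces=ns))  # -> non-binary
```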

If you would like to contribute a 5- to 10-minute lightning talk on "Encoding Gender", please send a short, informal description of your contribution to neuber@bbaw.de by 15 May 2025.

Best regards,

Nadine Arndt and Frederike Neuber
on behalf of the ediarum coordination team and the Gender & Data working group

* For organizational reasons, we are deviating slightly from our usual rhythm this time.

OpenAlex: The open catalog to the global research system

1 April 2025, 22:44

OpenAlex is a database of academic authors, institutions, and publications. Since its launch in January 2022, OpenAlex has received a lot of attention as an alternative to commercial research databases such as Web of Science or Scopus – one that better meets academic needs and values. OpenAlex draws on a multitude of sources across all fields of science and all languages, on a global scale. Users can search by author, institution, and research output, and filter specifically by type of output (article, book, dataset, preprint, editorial, etc.), citations, publication date, or Open Access availability. The starting point for OpenAlex was the dataset of the discontinued Microsoft Academic Graph (then the second-largest academic search engine after Google Scholar), which has been enriched and refined – a process that is still ongoing – so that it can serve as an alternative to commercial research databases for all kinds of searches and bibliometric analyses.

The OpenAlex data – which is shared under an open licence, namely Creative Commons Zero (CC0) – is available in three ways: via an online user interface ('OpenAlex Web'), via data snapshots (which let you save a local copy of the OpenAlex database as it is at the time of download), and via the OpenAlex API. Use of OpenAlex Web, the data snapshots, and the OpenAlex API is free of charge. There is a paid service that accommodates intensive use and offers additional support, but the free version suffices for the typical individual user.
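As a brief illustration of the API route (the search term and email address below are placeholders), a query for recent open-access works can look like this:

```python
import requests

# Query the OpenAlex API (https://docs.openalex.org) for open-access works
# matching a search term; "mailto" opts into OpenAlex's polite pool and
# should carry your own address.
resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "search": "digital humanities",
        "filter": "open_access.is_oa:true,publication_year:2024",
        "per-page": 5,
        "mailto": "you@example.org",  # placeholder address
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    print(work["publication_year"], "-", work["display_name"])
```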

The (lack of) cost, as well as the open philosophy behind it, sets OpenAlex apart from commercial products like Web of Science and Scopus. These are expensive products, and a recent study even shows that the companies behind them use sales strategies that maximise profits at the expense of the academic community. What is more, OpenAlex is lauded for its completeness and inclusivity. Web of Science and Scopus are selective databases, based on a curated set of sources (criticized in the past for being too focused on particular disciplines, as well as on specific languages, regions, and publication types), whereas OpenAlex tries to be as complete as possible and is therefore more representative not only of disciplines like the humanities, but also of the state of research in various languages on a global scale.

Quite a large number of studies analysing the quality and (dis)advantages of OpenAlex have been produced recently. The status quaestionis is:

  • If one wants to get as complete a picture as possible of the research output of an author or of an institution as a whole (all scientific disciplines, all languages, all publication types), it is advisable to use OpenAlex.
  • If one wants to map the OA availability of research output, it is advisable to use OpenAlex.
  • For specific bibliometric analyses, it may be advisable to use Web of Science or Scopus due to the selectivity of the database and the (for the time being at least) relative superiority of the metadata, provided that one is aware of the limitations (e.g. in terms of scientific discipline, publication type and language).
  • When compiling systematic reviews, it depends on the exact objective. If one wants to map scholarly literature on a particular topic as completely as possible, it is advisable to use OpenAlex; if, on the other hand, one wants to obtain a selection of scholarly literature that is representative of mainstream research in Western Europe and North America, it is advisable to use Web of Science or Scopus for certain scientific disciplines (for other disciplines, no database is suitable for this purpose).

Celebrating Women’s History Month

By RRCHNM
7 March 2025, 03:27
Since RRCHNM’s founding in the 1990s, we have been committed to highlighting the contributions women made in the past. One of our first projects was a CD-ROM version of the textbook Who Built America? which grew out of efforts to reinterpret American history from “the bottom up”—drawing on studies of workers, women, consumers, farmers, African […]

Recap: How do you do it? A behind-the-scenes look at research workflows (2024)

12 December 2024, 00:02

Every academic year, the HDYDI (How Do You Do It?) event on research data workflows signals the start of the Digital Scholarship Module. Through a series of sessions and (mini-)workshops, Artes Research aims to guide students through the complexities of scholarship in the digital age, from Open Science to Research Data Management and beyond. At the HDYDI kick-off event, three researchers from the Faculty of Arts lift the curtain on their own research workflow and offer a behind-the-scenes look at the ways in which they approach their research, the data they engage with, and the tools they use in doing so. The goal of this session is to provide examples of more advanced workflows for the first-year PhD researchers as they embark on their own research journey. Hopefully this recap of the session can spark some inspiration for you!


Seb Verlinden – Using Obsidian as a note-taking tool for literature

The first speaker, Seb Verlinden, is a second-year PhD candidate in medieval history. Under the supervision of Maïka De Keyzer and Bart Vanmontfort, Seb is studying the long-term landscape changes – mainly in the form of gradual desertification – that characterize the Campine region, one of the driest areas in Belgium. Particular focus is on the impact of eighteenth-century drainage in the region.

Seb’s talk concerns an issue that all researchers can relate to, regardless of the relative complexity of their project – that of taking notes. It is true, as Seb highlights, that every researcher has their own unique workflow, often relying on a combination of tools that makes sense for them (in his case, QGIS, FileMaker Pro, MAXQDA, and spreadsheet software). But at the heart of any research process is the need to organize one’s thoughts, and this is where note-taking apps can make a real difference. So, what are some of the options out there?

Zotero is a possible solution – one we’ve already discussed elsewhere on this blog. As a reference manager first and foremost, Zotero has the potential to become a researcher’s living library, a knowledge base covering all relevant literature. It also has great capabilities for annotating PDFs, especially with its new 7.0 update. What you’re missing in the context of note-taking, however, is the big picture. Seb aptly points out that using Zotero to make notes is like putting post-its in books: you have no real overarching structure, and no way to easily link notes across books.

Other tools are likewise flawed. Lots of researchers use Microsoft Word to take notes, even though it is primarily tailored to mid-length longform text. As a result, it is easy to lose track of notes unless you're willing to navigate multiple files, and Word tends to grow slow and cumbersome because it also has to handle layout. It is, simply put, unintuitive for this purpose.

This is why Seb puts forward another solution, one that he believes to be faster, better automated, and easier to use: Obsidian. A widely supported and free tool, Obsidian has clear advantages: in contrast to both Microsoft Word and Zotero, it stores notes in an open file format (.md, i.e. Markdown files, written in an accessible markup language), it is full-text searchable, and it provides a structured overview of notes. Moreover, it offers a versatile workspace, allowing you to go as simple or as complex as you like – especially with the addition of supported plugins. One such plugin, in fact, allows your Obsidian environment to interoperate easily with your Zotero library (including references, bibliographies, and PDF annotations), which is particularly useful.

Seb ends his talk by highlighting another key benefit in using Obsidian. By introducing links in your notes, it is possible to cross-reference other notes within your system with minimal user effort; and through the use of tags, you can generate another layer of structure. Obsidian then uses this information to visualize the relations between your different notes, automatically creating a network of clusters that correspond to certain topics of interest. This way, it expands the possibilities of the data without the need for the researcher to make any real effort – a great reason to think about using Obsidian for your own note-taking needs!
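To make the mechanism concrete: the graph Obsidian draws is derived from nothing more than the [[wikilinks]] in your Markdown files. Here is a rough, illustrative Python sketch of the same extraction (the vault path is hypothetical):

```python
import re
from pathlib import Path

VAULT = Path("my-vault")  # hypothetical folder of Markdown notes

# Map each note to the set of [[wikilink]] targets it contains,
# ignoring aliases ("|") and heading anchors ("#").
links = {}
for note in VAULT.glob("**/*.md"):
    text = note.read_text(encoding="utf-8")
    links[note.stem] = set(re.findall(r"\[\[([^\]|#]+)", text))

# Each (note, target) pair is one edge of the graph Obsidian visualizes.
for note, targets in links.items():
    for target in targets:
        print(f"{note} -> {target}")
```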

Seb showcased his own network of notes, automatically clustered by Obsidian. This way, he can visually grasp the connections between different topics of interest!

Laura Soffiantini – Managing linguistic and historical data: a PhD workflow using FileMaker

Laura Soffiantini is the second speaker: as a PhD researcher at the Cultural Studies Research Group, she is currently analyzing the geographical representation of Greece in Pliny the Elder’s Naturalis Historia. With the help of her supervisor Margherita Fantoli, Laura intends to shed new light on the way in which Greece was perceived in Flavian-era Rome. In order to do so, she has to manage a varied mix of linked data – textual, linguistic, and historical – as part of her daily routine.

Grappling with 37 books of a classical encyclopedia, and dealing with data in different formats and with different qualities (actual text, numeric coordinates, symbols, etc.), Laura realized the importance of proper Research Data Management. It enables aggregating, manipulating, analyzing, and comparing your data more efficiently throughout – and even beyond – the research process. Indeed, a challenge faced by many researchers is the retrieval of data collected or processed at an earlier time, with the aim of relating it to “new” data. In this context, Laura provides a look at her own research workflow.

The primary strategy in managing your data, she remarks, is to structure it. By adding structure to your data, you can parse it more easily and return to it without issues, even in later phases of your project. Software like Obsidian is indispensable for this purpose, but it's also good to think about using tabular formats like .csv (an open plain-text format) as a way to organize your data. A useful tool put forward here is pandas, a Python library designed to help manage and analyze data derived from such .csv files. That might sound technical, but Laura assures us that – even if you have no background in programming – pandas is a very accessible and convenient tool for handling tabular files.
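As a small taste of that accessibility (the file and column names here are invented for the example), loading and inspecting a tabular file takes only a few lines:

```python
import pandas as pd

# Load a CSV file into a DataFrame and take a first look at the data.
df = pd.read_csv("tokens.csv")
print(df.head())                   # first five rows
print(df["lemma"].value_counts())  # how often each lemma occurs
```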

Having thought about what data she worked with (an essential step for every researcher), Laura adopted an initial workflow in three parts. She first started out with .json files containing Pliny’s text, which she converted into tabular .csv files, adding data related to the lemmatization of the corpus, part-of-speech tagging, and references to book and chapter positions. Subsequently, she thought about grouping this data into different categories, which she assigned to different columns – such that there is a column titled “book_chapter”, one titled “lemma”, and so on. Finally, Laura assigned identifiers to the information contained in these files; she explains she wasn’t aware of the importance of such identifiers at the start of the project, but now realizes they form a crucial part of keeping tabular data.
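A sketch of that first conversion step, assuming (purely for illustration) a JSON file with one record per token; the field names are invented, since the post does not show Laura's actual schema:

```python
import json
import pandas as pd

# Flatten a JSON file of tokenized text into one table row per token.
with open("pliny.json", encoding="utf-8") as f:
    records = json.load(f)  # assumed: a list of dicts, one per token

df = pd.DataFrame(records)
df = df[["token_id", "book_chapter", "token", "lemma", "pos"]]  # invented columns
df.to_csv("tokens.csv", index=False)
```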

As a result, Laura ended up with multiple .csv files, which she then related to each other using FileMaker (with the expert assistance of Mark Depauw and Tom Gheldof). One table, for instance, contains a list of all the Latin words used (the tokens, e.g. urbs) alongside their identifier, book number, lemma, and, where available, an identifier linking to the Trismegistos database of ancient texts. Another contains the lemma along with its part-of-speech tag (e.g. proper noun) and meaning (e.g. "city"). By linking the different files through these identifiers – the keys to the data – Laura created a relational database that is easily managed and organized through FileMaker. The resulting dataset is at the core of her research project.
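The same relational logic can be expressed with pandas – a hedged sketch, with illustrative file and column names, of the join that FileMaker performs through its relations:

```python
import pandas as pd

# Join the token table to the lemma table through a shared key,
# mirroring a FileMaker relation between the two files.
tokens = pd.read_csv("tokens.csv")    # e.g. token_id, token, book, lemma_id
lemmata = pd.read_csv("lemmata.csv")  # e.g. lemma_id, lemma, pos, meaning

merged = tokens.merge(lemmata, on="lemma_id", how="left")
print(merged[["token", "lemma", "pos", "meaning"]].head())
```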

The main takeaway Laura wants to leave us with is that it is important to create an environment in which you can efficiently collect, store, manipulate, and analyze your data. This should not come at the cost of traditional approaches and methodologies – in fact, you can add to them to create a better workflow as a whole!

Laura showed us some examples of how she used specific identifiers to connect tabular files and create a relational database in FileMaker.

Zakaria El Houbba – Obsidian as part of the research workflow

The third and final speaker is Zakaria El Houbba, a third-year PhD candidate in Arabic Studies. Zakaria's project, supervised by Arjan Post, focuses on the pre-modern relation between Islamic jurisprudence and Sufism, and in particular on the way in which these two strands are united in the figure of Aḥmad Zarrūq. In doing so, the research aims to arrive at a theory of applied legal epistemology in Zarrūq's Sufism.

By discussing his own workflow in detail, Zakaria intends to highlight a number of key takeaways revolving around the idea of the “second brain”. Because we are so deeply involved with knowledge gathering on a daily basis, and constantly receive input from various sources (whether academic or not), we run the risk of being overwhelmed by a flood of information. When you use software to carry that burden for you, you can save your own brainpower for actual critical thinking rather than secondary tasks like categorizing information. This way, you’re effectively constructing what’s referred to as a second brain.

In this context, Zakaria also makes use of Obsidian, though he approaches it from a very different angle than Seb. Zakaria doesn’t actually enter all of his notes into Obsidian – he first uses an app like Microsoft OneNote as a “vault” to record random, non-processed thoughts, which he periodically goes through to think about how they fit in his project. He then sorts these thoughts and puts them in corresponding folders (relating to certain projects, classes, issues, etc.) in order to process them properly in Obsidian. Zakaria emphasizes that it’s fine to keep it simple and take it slow, focusing on what you specifically need from the note-taking environment so as not to get overwhelmed by all the options and information.

There are more tools Zakaria uses in his workflow – in fact, he says, there is a constant conversation between himself, Obsidian, Zotero, and ChatGPT. He uses Zotero to make notes and highlight text when reading articles, which he imports into Obsidian and categorizes using tags. Afterwards, he copies those highlights from Obsidian into ChatGPT, asking it to take up the role of copy editor and summarize the text. The resulting summary, which he critically revises, is then given a place in Obsidian once again.

Next to the powerful visualization capabilities discussed by Seb, Zakaria explains that Obsidian can also be used to create subpages within notes to explain terms and concepts, provide brief biographies of important figures, and so on. These “subnotes” can be linked back to in other notes as well, resulting in a kind of personalized Wikipedia for your research topic. This can also be helpful when you’re following classes on a certain topic or revising your own teaching material!

Finally, speaking of teaching material, Zakaria points us to a couple of helpful AI tools that can be used to process video files, such as recorded lectures or talks – whether you attended them or gave them yourself. One such tool is NoteGPT, which essentially functions as a transcriber and summarizer of recordings. You can revise the resulting transcriptions and summaries and copy them into Obsidian as well, further expanding the scope of your second brain. Brisk Teaching serves a similar purpose to NoteGPT, but can also turn a video into a PowerPoint presentation, which can be very convenient and time-saving. By constructing a workflow in this way, gradually accumulating relevant information through different tools, you make it much easier to manage your research.

The home tab of Zakaria’s Obsidian environment. As both he and Seb explained, you can make it as simple or complex as you like – try to make it a welcoming space for your daily research workflow!

The workflows of the presenters reveal both similarities and differences, but there’s one thing all three can agree on – what’s important is to find a workflow that works for you. To that end, take inspiration from some of the tools and processes described here, but always make sure they support your specific research methods. This was emphasized in the questions as well: don’t feel pressured to adopt a tool like Obsidian, but try it out and see if it accommodates your needs. Who knows, you might uncover a more efficient workflow or see your data from a new perspective.

Happy holidays from the Artes Research team, and may your data be blessed in the year to come! 🎄

Teaching, Writing, and Research with AI

13 February 2025, 23:29
When ChatGPT first appeared in November 2022, the almost universal reaction in the humanities community could be summed up in one word – Yikes! Almost without warning, this new tool seemed ready to make it incredibly easy for students to "write" essays using prompts that took no more than a minute to produce and […]

Celebrating Black History Month

By RRCHNM
10 February 2025, 23:50
Just a couple miles from RRCHNM is the campus of Woodson High School, part of the Fairfax County Public School system. Until this past year the school was named for W. T. Woodson, the long time superintendent of FCPS and an opponent of school desegregation. Now the school is named after Carter G. Woodson. Born […]

Humanidades digitales: ese fuego incomprendido [Digital humanities: that misunderstood fire]

4 February 2025, 04:39

By Jaime Ricardo Huesca

I teach at a university institution. In my experience, bringing the digital humanities into educational spaces involves a certain degree of difficulty, and I start from the following fact: people do not know what this discipline means or what it entails. When one asks the students, in good faith, silence eclipses the session.

After the awkward moment – by now normalized, because the situation keeps repeating itself – it falls to us to take steps toward constructing a possible definition of the digital humanities.

Continue reading "Humanidades digitales: ese fuego incomprendido" at Red de Humanidades Digitales.

Graduate Student Reflections: How Network Analysis Influenced My Research

28 January 2025, 01:00
As a fifth year PhD candidate in the History Department, I have combined my desire to learn everything I can about female preachers in the early American republic with my enthusiasm for any and all data visualizations and digital humanities tools. Committed to these women, just as they committed themselves to their itinerant ministries, I […]

RIDE 19 published

20 December 2024, 19:08

We are pleased to announce the 19th issue of the review journal RIDE, published since 2014 by the Institut für Dokumentologie und Editorik (IDE). The current issue in the "Tools and Environments" strand, edited by Roman Bleier and Stefan Dumont, contains two reviews so far (one in English, one in German).

Issue 19 appears as a "rolling release", i.e. the issue is not yet complete, and further reviews will be published soon.

All reviews are available at https://ride.i-d-e.de/issues/issue-19.

Call for Contributions for the workshop "Text+: Digitale Forschung auf der Grundlage von Text- und Sprachdaten bereichern" at the DHd 2025 conference in Bielefeld

17 December 2024, 22:11

As part of the Digital Humanities im deutschsprachigen Raum (DHd) conference at Bielefeld University, the workshop "Text+: Digitale Forschung auf der Grundlage von Text- und Sprachdaten bereichern" ("Text+: Enriching digital research based on text and language data") will take place on 3–4 March 2025. The workshop offers a hands-on look at the Text+ service portfolio and, in collaboration with the community, explores open needs.

The workshop organizers encourage participants to state in advance their needs regarding Text+ that go beyond its existing services. These can be new tools, software pipelines, data deposit services, guidelines, training offers, and much more – including extensions of existing services with further features and capabilities.

Short abstracts of at most 500 words are invited; each should describe a desideratum in the Text+ service portfolio, justify its relevance for research, and outline how the open need could be met.

Five submissions will be given the opportunity to present their need at the workshop in a short presentation (max. 10 min) and to discuss it in the plenary. All submitters are invited to contribute a poster that makes a visual case for their needs.

Abstracts will be accepted until 19 February 2025, 23:59 CET, at office@text-plus.org.
