
Fighting for gender equality in the digital world

September 5, 2022, 18:01

The aim of the three-year project ‘Égalité de genre et transformation numérique’ (‘genre/numérique’), which will hold its first conference in Fribourg on 8-9 September, is twofold: to ensure that movements toward gender equality are supported in the digital domain, and to address gender bias in the development and deployment of digital technologies.

‘Genre/numérique’ is jointly led by the UNIL-EPFL dhCenter, the UNIL and EPFL equality offices, HES-SO (Haute École spécialisée de Suisse occidentale), HEG-FR (School of Management Fribourg), and the StrukturELLE association, with support from Swissuniversities. It was initiated based on the recognition that issues related to gender equality and the digital transformation must be addressed in relation to one another, rather than separately – as is often the case.

“Our project notably aims to challenge digital biases, such as those developed or reinforced by algorithms, and to support equal access to positions of power in digital enterprises,” explains project team member Héloïse Schibler. She and the genre/numérique team are working on a video series that dives into deeper detail on key issues – such as gender balance and digital education, gender roles on dating apps, and gender bias on Wikipedia – with selected experts.

In addition to opening a transdisciplinary dialogue on these issues, the team plans to evaluate the effects of gender bias and digital transformation, study approaches to promoting gender equality using digital methods, and outline best practices for supporting women in the digital sector.

Toward gender equality in the digital world?

Genre/numérique is organizing a cycle of three annual conferences, the first of which will be held on September 8-9 in Fribourg. Entitled “Toward gender equality in the digital world?”, this event is free and open to all upon registration, and is aimed in particular at participants from academia, digital enterprise, media, and public policy. This year, a special thematic focus will be placed on issues in digital entrepreneurship, such as women’s participation in and access to tech start-ups (view full program).

“Although there is a great need for these kinds of conversations, to my knowledge, this event is unique in the region. We will have diverse speakers touching on all digital domains, with emphasis on active, hands-on exchanges such as round tables, working group discussions, and networking sessions,” Schibler says.

The proceedings of this and the succeeding two conferences, which will cover themes ranging from digital ethics and public policy to education and training, will be used to develop and publish a white paper of best practices, with the goal of reinforcing the visibility of issues relating to gender and the digital, and offering solutions for a more inclusive digital world.

The post Fighting for gender equality in the digital world appeared first on dhCenter.

An open discussion on the impacts of the digital

June 14, 2022, 15:45

Should students learn to program before they can write? Will we still have newspapers in 50 years? What will museums look like in the future? The inaugural DHdays at EPFL aim to celebrate the diversity of digital humanities innovations, while exploring emerging research questions at the intersection of science and society.

Organized by the College of Humanities and UNIL-EPFL dhCenter in partnership with the Initiative for Media Innovation (IMI), the DHdays will be a unique opportunity for participants to learn about the latest work on how digital innovations in the arts, humanities, and social sciences are impacting the world we live in – for better or for worse. This year’s event will address the three key themes of digital humanities (DH) & Media, DH & Education, and DH & Heritage.

“There is so much digital growth happening so fast, we often don’t know what our colleagues are doing, and we don’t yet have an overarching vision of the digital humanities community of practice,” says co-organizer Isaac Pante, dhCenter academic director for UNIL. “The DHdays is an open invitation to create synergies, not only in Lausanne but beyond.”

A place of experimentation

A senior lecturer in digital culture and digital publishing at the University of Lausanne (UNIL), Pante is a strong proponent of empowering students in the social sciences and humanities (SSH) through the development of computational thinking skills. He hopes that the DHdays will bring the digital humanities into an open dialogue with the public via roundtables and a project forum, where visitors will be able to experiment with technologies used in DH research, like video games.

“When SSH students step into computational learning, by developing a video game for example, they have to learn a whole new language. But after studying computational models, there is a feedback effect that allows you to start thinking computationally,” he explains. “The beauty of the digital humanities is that you can then bring an SSH perspective to computational problems: for example, understanding the historical context of a video game’s source code.”

“The future is already here”

An international lineup of speakers from academia, media, education, and culture will also lead participants on an exploration of how digital tools and methods have transformed research questions and practices in the arts and humanities, as well as the social, engineering, and computer sciences.

“For the DHdays, we have tried to bring together the best international experts and local actors,” says co-organizer Béla Kapossy, director of EPFL’s College of Humanities. “For example, the session on the future of the press will feature a dialogue with the former head of R&D of The New York Times, the former director of strategic initiatives of The Washington Post, the editor-in-chief of Le Temps, the head of interactive content of Tamedia, and several researchers working on the history of the press at UNIL and EPFL. IMI’s research projects and the best start-ups in the field will also be presented.”

On the subject of video games, the conference will also notably feature Lancaster University professor Sally Bushell, co-creator of the Minecraft-inspired literature game LitCraft, as well as French video game design pioneer and Adibou co-creator Muriel Tramis.

“These kinds of events are often confined to an expert audience, but we want to open up the discussion on the impacts of the digital, whether it’s video games, media recommendation algorithms, cultural heritage preservation, or digital citizenship,” Pante says.

He emphasizes that the DHdays is intended not as a showcase, but as a celebration of the diversity of DH research. Although it’s clear the two-day event will not cover all DH subjects, it will represent an important opportunity for researchers to make new connections.

“In the words of William Gibson, ‘the future is already here; it’s just not evenly distributed’. We DH researchers all have strengths and weaknesses that can inform one another. The goal is thus not for everyone to become an expert in everything, but to facilitate a dialogue and fruitful complementarities.”

DHdays – practical details

This two-day bilingual event, to be held at the SwissTech Convention Center, is free and open to the public. Please register to attend in Lausanne, or through our online attendance option. Please visit the event website, dhdays.ch, for the full program and speaker list, and be sure to follow us on Instagram and/or Twitter to receive all the latest updates.

Report: Digital Criticism Unconference

February 22, 2022, 23:39

The results of the Digital Criticism Unconference, co-organized by the dhCenter and held online in October 2021, are now available in a report drafted by dhCenter member Jessica Pidoux (EPFL/Sciences Po).

The Digital Criticism Unconference invited digital humanities researchers to engage in a dialogue on a variety of emerging issues with a common theme: the digitalization of culture and society. The aim was to create an open space where scholars could network; share insights and challenges relating to current research projects and methods; and discuss trends in the fields of digital studies and digital humanities.

The online event was organized with the collaboration of infoclio.ch; the Universities of Geneva, Basel, Bern, and Lausanne; and the Swiss Academy of Humanities and Social Sciences (SAGW/ASSH).

Read the report online, or download the PDF. Videos of the two keynotes and 14 panel reports are also available on the event website.

Read our recap of the event:

Online ‘unconference’ turns a critical eye on digital society

Unique corpus gives a voice to England’s laboring poor

February 18, 2022, 00:06

Because the laboring poor in Late Modern England were rarely literate, their stories have been told almost exclusively by a privileged minority of educated authors and playwrights. A research project led by dhCenter member Anita Auer aims to change that. (Lead image: Petition letter, reused with permission of the Cumbria Archive Centre, Barrow-in-Furness (Ref: BPR10O52).)

The LALP project (Language of the Labouring Poor in Late Modern England) is based on a corpus of some 2,000 letters, which were handwritten by English paupers during the mid-18th and early 19th centuries to petition for economic relief under the nation’s Old Poor Law.

Because compulsory elementary education was not established in England until 1880, the writers of these letters were often semi-literate. As a result, the letters – curated by independent researcher Tony Fairman – contain unique spelling and syntactical patterns and phonetic representations of speech that reveal how their writers may have sounded.

According to LALP project leader Anita Auer, the digitization and standardization of these letters provide an unprecedented opportunity to listen, for the first time in the history of the language, to the voices of a working class that has never before been heard – despite the fact that it comprised some 80% of the population in Late Modern England.

Facsimile of a petition letter, reused with permission of the Cumbria Archive Centre, Barrow-in-Furness (Ref: BPR10O52).

Transcription of the facsimile above, with examples of phonetic spelling (e.g. Dun ‘done’; Sum thing ‘something’), which indicate that the writer had a Northern English pronunciation © LALP/Anita Auer

Balancing digital and human approaches

Funded by the Swiss National Science Foundation, the LALP project runs from 2020 to 2024. In that time, the project members – experts in historical and corpus linguistics – aim to transcribe the letters, convert them to digital format, and organize them in a searchable database for the academic community.

Auer, a professor of English linguistics at the University of Lausanne (UNIL), says that creating such a database will require a combination of software and human problem-solving.

“The work has some elements of forensic linguistics, including matching handwriting and language use. One of the tools used by the project members is the VARD variance recognition software that helps us normalize spelling. But in other cases, we still rely on close reading skills. For example, ‘I am’ may be written ‘I ham’, as the letter-writer wishes to avoid the stigma of sounding lower class by dropping their hs. But a computer would still assume this means ‘ham’!”
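
The context problem Auer describes can be illustrated with a small sketch. This is not the VARD software itself – the variant dictionary and the hypercorrection rule below are invented for illustration, based only on the examples quoted in this article (Dun ‘done’; ‘I ham’):

```python
# Toy sketch of historical spelling normalization -- NOT the VARD tool.
# It shows why context matters: "ham" is a valid English word, so a plain
# dictionary lookup would leave "I ham" untouched, while a rule that checks
# the preceding word can recover the hypercorrect h-insertion.

# Hypothetical variant dictionary (historical form -> standard form),
# based on examples quoted in the article.
VARIANTS = {
    "dun": "done",
}

def normalize(text: str) -> str:
    """Lowercase a letter fragment and normalize it word by word,
    with one context-sensitive rule for hypercorrect h-insertion."""
    words = text.lower().split()
    out = []
    for i, word in enumerate(words):
        if word == "ham" and i > 0 and words[i - 1] == "i":
            out.append("am")  # 'I ham' -> 'I am' (avoiding h-dropping stigma)
        else:
            out.append(VARIANTS.get(word, word))
    return " ".join(out)

print(normalize("I ham Dun with it"))  # -> "i am done with it"
```

A real pipeline would of course need far richer rules and, as Auer notes, manual checking; the point of the sketch is only that normalization cannot be a purely word-by-word lookup.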

The sheer amount of variation the letters contain is also a challenge: a given word may be spelled any number of ways. Moreover, those unable to write would often engage a friend or relative to compose the petition for them, adding another layer of complexity to demographic metadata such as age and gender.

Towards a normalized version of the petition letter using VARD software. The highlighted forms indicate recognized spelling variants. Interestingly, the word Right ‘write’ was not recognized, as Right exists in Standard English orthography. While the option of automatic normalization exists, manual checking, corrections, and training are necessary. © LALP/Anita Auer

A voice on the page

Despite the onset of the pandemic, Auer and her team have painstakingly analyzed and converted all the letters to plain text with relevant metadata. Going forward, she hopes that digital humanities and corpus linguistics tools will allow her team to compare the speech of the laboring poor with that of the middling and elite members of society.

Another goal is to better understand how the introduction of grammar rules during this period created language ideologies that dictated which pronunciations were associated with sounding rich and educated, or poor and illiterate.

“When it comes to pronunciation, we can see for example that some letter-writers were aware that dropping their hs was ‘bad’, which is why they would sometimes re-insert them. But they did not appear to care about grammar rules such as double negation or preposition placement.”

Auer says that the letters also provide a unique opportunity to take a precise philological approach to this kind of linguistic research, as most previous projects have been carried out by historians, who have different research aims with this kind of data and therefore sometimes introduce edits to transcriptions to make the petitions more legible.

“In sociolinguistics, we are always interested in the social impact of research. This will be the first time we can make a contribution of this kind to understanding the sociolinguistic history of English.”

Hackathon generates new tools for digital text collections

November 5, 2021, 21:12

The first Distributed Text Services (DTS) Hackathon, co-sponsored by the dhCenter, yielded four award-winning and ready-to-implement ideas for improving uptake of DTS, which defines an API for working with collections of text as machine-actionable data.

The DTS specification defines a type of software interface called an API (application programming interface), which allows digital text corpora to be published in a standardized, uniform way that is more easily accessible and navigable across platforms.

According to the DTS community, which organized the online hackathon held from September 27th-October 8th, the specification enables machine-consumption of digital text collections, and can help publishers of such collections make their data Findable, Accessible, Interoperable and Reusable (FAIR).

“The DTS specification allows you to put a corpus online in such a way that users can access metadata about your documents, browse its sub-collections, retrieve pieces of text, etc.” says dhCenter member Matteo Romanello, a lecturer at the University of Lausanne and co-coordinator of the hackathon. “You can use it for browsing virtual collections, and at the document level, you can discover the table of contents, elements of text, and navigation endpoints.”
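
As a concrete sketch of what such access looks like, the snippet below parses a collection response of the JSON-LD shape described in the DTS draft specification. The sample data is invented for illustration, and field names may differ across versions of the specification:

```python
import json

# A minimal sample response from a DTS Collections endpoint, following the
# JSON-LD shape described in the DTS draft specification (illustrative only;
# field names may differ across spec versions).
SAMPLE_RESPONSE = """
{
  "@id": "general",
  "@type": "Collection",
  "title": "Example corpus",
  "totalItems": 2,
  "member": [
    {"@id": "urn:ex:letters", "@type": "Collection", "title": "Letters"},
    {"@id": "urn:ex:doc1", "@type": "Resource", "title": "Document 1"}
  ]
}
"""

def list_members(response_text: str):
    """Return (id, type, title) tuples for each member of a collection."""
    data = json.loads(response_text)
    return [(m["@id"], m["@type"], m["title"]) for m in data.get("member", [])]

for member_id, member_type, title in list_members(SAMPLE_RESPONSE):
    print(f"{member_type}: {title} ({member_id})")
```

In practice the response text would come from an HTTP GET against a publisher's collections endpoint; the uniform shape is what lets the same client code browse any DTS-compliant corpus.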

Making digital humanities research easier

Romanello explains that while another API specification, the International Image Interoperability Framework (IIIF), already exists for images, DTS is the first such specification aimed at text corpora, which makes it a key tool for digital humanities researchers. He notes that a key feature of DTS is that it is generic, meaning that it can be used regardless of text language or format.

“The previous standards that inspired DTS were not generalizable. If publishers use DTS, any collection can be accessed in the same way, whether or not the text is canonical, or homogeneous. This means that it can be used to retrieve data about inscriptions or texts on papyrus, for example.”

In addition to improving the ecosystem of DTS data and tools, a main goal of the hackathon was to showcase the utility and versatility of the standard, which is currently implemented by five corpora, and promote its uptake by research institutions, digital heritage collections, and other publishers of digital texts.

Tools, data, and documentation

The hackathon had 26 registered participants who teamed up to develop a total of nine hack ideas. Four were selected for awards following evaluation by an international jury of experts (Thibault Clérice, École nationale des chartes; Berenike Herrmann, University of Bielefeld; Leif Isaksen, University of Exeter; Davide Picca, University of Lausanne; Elena Pierazzo, University of Tours; and Valeria Vitale, Turing Institute).

All contributions were divided into the categories of tool hacks (aimed at developing software tools that enrich or annotate DTS-ready corpora, and at facilitating the publication of DTS-compliant corpora) and data hacks (aimed at exposing existing textual resources via a DTS-compliant API). A third category, documentation hacks, emerged during the hackathon itself.

Judging criteria included the extent to which the hack contributes to the DTS ecosystem of tools and resources, its potential to increase the adoption of DTS, the ease of use of the resulting tool, and its usefulness for the broader community.

🏆 Best tool hack (to consume DTS data): DTS2CSV by Laurent Millet Lacombe (MetaindeX) and Audric Wannaz (University of Basel)

This “extremely useful” hack was selected by the judges for its ability to make DTS more user-friendly. Lacombe and Wannaz developed a Python tool, which can run both as a command-line tool and as a graphical user interface (GUI), to convert content available via the DTS API into the tabular CSV format. The tool’s behaviour is fully configurable by means of a JSON configuration file. Having DTS data in CSV format can be very useful to further analyze and explore such content, and to compute statistics about DTS collections and texts.
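
The core idea – flattening a DTS collection's member list into tabular rows – can be sketched in a few lines. This is an illustrative sketch, not the DTS2CSV tool itself, and the chosen columns are an assumption:

```python
import csv
import io

# Sketch of the core idea behind converting DTS collection data to CSV:
# flatten the "member" array of a collections response into tabular rows.
# Illustrative only -- not the award-winning DTS2CSV tool; the column
# choice (id, type, title) is an assumption.

def members_to_csv(collection: dict) -> str:
    """Write each member of a DTS collection as a CSV row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "type", "title"])
    for member in collection.get("member", []):
        writer.writerow([
            member.get("@id", ""),
            member.get("@type", ""),
            member.get("title", ""),
        ])
    return buf.getvalue()

sample = {"member": [{"@id": "urn:ex:doc1", "@type": "Resource", "title": "Doc 1"}]}
print(members_to_csv(sample))
```

Once in CSV form, the collection data can be loaded into spreadsheets or data-analysis libraries for the kind of statistics the judges highlighted.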

🏆  Best tool hack (to produce DTS data): Reusable DTS web server for EpiDoc collection, by James Chartrand and Simona Stoyanova (Crossreads Project, University of Oxford) 

This proof-of-concept was described by the judges as “useful in its own right, as well as helpful for those looking to implement it for other datasets.” The team demonstrated that a DTS API can be created on top of a TEI-XML EpiDoc corpus, and stored on GitHub using a GitHub API. This is implemented by using the Express framework for Node.js, and a demo instance can already be tested.

🏆  Best data hack: DraCor-API to DTS, by Ingo Börner (University of Potsdam)

This hack has the potential to broaden the DTS audience by focusing on theater corpora. It adds a DTS endpoint to the DraCor-API, which serves drama corpora in 11 languages. Although DraCor uses a custom API, Börner extended it to support the DTS specification, thus making all DraCor corpus content accessible via a DTS API.

🏆  Best documentation hack: DTS & IIIF integration by Robert Casties (Max Planck Institute for the History of Science)

This concept was lauded by the judges for its novelty and creativity as well as its utility. The goal is to allow the DTS and IIIF standards to talk to each other, and to make the DTS API into an IIIF analog for text use on the internet. Casties explored some use cases for the integration of textual data served by the DTS API with image data served by the IIIF API. These initial use cases include the synchronized display of text and images for entire pages, as well as for individual sections of a text document.

Online ‘unconference’ turns a critical eye on digital society

October 28, 2021, 16:20

On October 21st and 22nd, the UNIL-EPFL dhCenter co-organized an online event with an unconventional format, which invited digital humanities researchers to engage in a dialogue on a variety of emerging issues with a common theme: the digitalization of culture and society.

The online-only event was organized with the collaboration of infoclio.ch; the Universities of Geneva, Basel, Bern, and Lausanne; and the Swiss Academy of Humanities and Social Sciences (SAGW/ASSH). The aim was to create an open space where scholars could network; share insights and challenges relating to current research projects and methods; and discuss trends in the fields of digital studies and digital humanities.

“The idea was to get an understanding of what topics people are interested in, and not just decide, top-down, what we need to talk about; but to try a bottom-up approach where people bring their ideas,” said co-organizer Tobias Hodel of the University of Bern.

The online format brought a meta-quality to the conference, which allowed participants to digitally vote on the program, present, speak, listen, and take notes, all while questioning and debating the technologies that made their interactions possible.

Conference, deconstructed

Three planned keynote talks saw historian and author Mar Hicks of the Illinois Institute of Technology bring a historical perspective to criticisms of computing; sociologist Dominique Cardon introduce the Sciences Po médialab’s “instrumental” approach to digital technology as a lens for social inquiry, plus open-source tools; and professor of journalism and digital media Nathalie Pignard-Cheynel of the University of Neuchâtel discuss the role of digital technology in the world of journalism.

Otherwise, the event’s structure and content was entirely determined by its participants. The bottom-up ‘unconference’ format allowed them to pitch their panel ideas on the first day of the event, which were then voted on to determine the final program.

Five concurrent panel slots each yielded three democratically selected discussion topics, covering an extremely broad range of subjects: micropublication platforms such as cache.ch; bias in machine learning algorithms; reproduction of artworks and other visualizations using digital methods; digital literacy; the democratization of archives via digitization; social media and images as sources of research data and tools; political discourse on YouTube; tools for managing the digital data deluge; and even reasons for the non-use of digital technologies.

“Be critical”

Despite the breadth and depth of the discussions, co-organizer Enrico Natale of infoclio.ch summed up the conference proceedings with a closing statement, remarking on the common theme of ethical considerations that touched nearly every discussion of digital technologies and methods. He also pointed out a possible avenue for future unconferences.

“An open question for me is to what extent ‘the digital’ is a concept you can grasp. We’ve been using digital studies for ten years now, so what is the consistency of the concept of a ‘digital turn’ for humanities? This is a strong invitation to stay open-minded, and to look for interdisciplinary exchanges.”

Natale closed the proceedings with a reference to the event logo, which juxtaposes two semicircles that appear to be different colors but are, by optical illusion, actually the same.

“Keep your eyes open, and be critical,” he told the participants.

dhCenter Scientific Committee Profile: Andreas Fickers

September 28, 2021, 16:26

As a professor of Contemporary and Digital History at the University of Luxembourg and founding director of the Luxembourg Centre for Contemporary and Digital History (C²DH), Andreas Fickers advocates for hands-on training in the use of digital research tools for teachers as well as students of history and the social sciences.

Andreas Fickers brings his expertise in digital history and historiography, digital tools and methods, media history, and digital hermeneutics to the dhCenter Scientific Committee. He remarks that becoming a digital historian specializing in the application of epistemological questions to the digital age was a “natural progression” following his studies in history, philosophy, and sociology. After receiving a PhD from RWTH Aachen University in 2002, he worked as assistant professor of television history at Utrecht University (2003-2007) and as an associate professor for comparative media history at Maastricht University (2007-2013).

To help connect research processes with new forms of data-driven publishing, Fickers and his colleagues at the C²DH are launching a new Journal of Digital History, whose unique platform accommodates historical research in data-driven and digital formats. Each submitted article is composed of a narrative layer, a hermeneutic layer, and a data layer.

He explains that at the C²DH, there is also a focus on the digital as it relates to public history and outreach to non-academic audiences.

“We reflect critically on how digital infrastructures, data, and tools change the way we think and narrate history. There is also a strong public history dimension that focuses on the development of new interfaces for storytelling, and for publishing research that can’t be showcased in a classical journal,” he says.

Breaking open black boxes

For Fickers, an important priority for the digital humanities is dispelling the “black boxes” of digital technologies, as they often appear to historians and social science researchers.

“Many historians don’t like technology, and are even afraid of technology. History students may choose to study history because they don’t like technology or the sciences, yet they use digital tools and infrastructure every day,” he says.

He therefore believes that hands-on training and experimentation with digital tools, such as text mining software, is essential to make these technologies more accessible.

“You very quickly learn there is no ‘push a button’ solution — you learn how hard it is to build a dataset, and to curate data in order to make it searchable, and turn it into an analytical object of study.”

Fickers adds that such training should be a priority not only for students, but also for teachers and established researchers. He cites the usefulness of interdisciplinary “trading zones” for bringing together different communities of practice to develop common vocabularies and understanding.

“Most of my colleagues are not digital-born, but analog-born, and they reproduce methods for doing research that are not really in sync with the digital possibilities we have right now. So training the trainers is for me an absolute priority; otherwise, we will widen that gap in the coming years.”

In addition to the C²DH, Fickers leads the Luxembourg National Research Fund Doctoral Training Unit ‘Digital History & Hermeneutics’ and co-coordinates the Trinational doctoral school. He is also principal investigator of the DEMA, Popkult60, and LuxTime projects, the Luxembourg national coordinator of DARIAH-EU (Digital Research Infrastructure for the Arts and Humanities), and a member of the joint research board of Humanities in the European Research Area.

What it really means to do interdisciplinary research

September 27, 2021, 21:04

On September 16th, the EPFL-UNIL Collaborative Research on Science and Society (CROSS) program hosted demonstrations of digital tools and proofs-of-concept resulting from recent mobility and digital humanities projects, as well as roundtables on the importance – and challenges – of supporting interdisciplinary research.

Interdisciplinary research has become a buzzword in academia, and is frequently used to enhance grant proposals and highlight conference themes. But when it comes to actually supporting such research within and between institutions, what resources are most helpful for researchers? What are the pitfalls to avoid and the benchmarks for success?

These were just some of the questions addressed during a series of talks and roundtable discussions organized by the EPFL College of Humanities (CDH) in collaboration with the UNIL-EPFL dhCenter on September 16th. The fruits of research projects on the recent CROSS grant themes of Mobility (2020) and Digital Humanities (2021) were also presented.

“We support new constellations of researchers who wouldn’t normally work together; this is the intersection of disciplines where new science happens,” said CDH Dean Béla Kapossy as he opened the day’s event. He also announced the theme of the 2022 CROSS call on “Responsible Innovation”.

Challenges give rise to opportunities

A major theme was the critical assessment of challenges and opportunities encountered in the daily practice of interdisciplinary research. While many of the CROSS participants expressed similar frustrations – such as developing common vocabularies and applying methodologies across fields – it also became clear that overcoming such challenges can yield unique and valuable opportunities.

For example, Laboratory for the History of Science and Technology (LHST) head Jérôme Baudry observed that for his investigation of how users navigate online research platforms, the challenge was to avoid partitioning the project into “theoretical” and “methods” sides, which inspired new approaches to developing hypotheses. “We tried to go beyond starting with anthropological concepts, and then using digital humanities to inform or confirm a thesis defined by the humanities,” Baudry said. He added that having to communicate across domains also encourages researchers to better clarify and even question the operationalization of their own concepts.

Similarly, Marie-Hélène Côté of the project “Names of Lausanne: the evolution of family names in administration records 1803-1900” noted that, as a linguistics expert, being challenged to understand issues and research questions from other domains, like computer science, created the opportunity to “diversify the nature of questions that can be asked”.

Enabling interdisciplinary spaces

A final roundtable featured EPFL professor and Associated Vice President for Research Ambrogio Fasoli and UNIL CoLaboratoire director Alain Kaufmann, who outlined institutional and infrastructural challenges to supporting interdisciplinary research, and efforts to overcome them. Both speakers emphasized the importance of creating adequate space, both theoretical and physical, for such investigation to flourish. While Kaufmann argued for agile administrative procedures and more physical spaces for researchers to engage with citizens, Fasoli advocated larger sums and longer durations for interdisciplinary research grants.

Fasoli concluded by highlighting some problems that remain unresolved, such as identifying metrics for evaluating careers that extend across multiple fields. However, he notably cautioned against prescribing too strictly what interdisciplinary research should and shouldn’t do or be.

“The less we try to define what needs to be done, the better. We need to enable spaces, mechanisms, and instruments where people can come together and create excellence,” he said.

View all CROSS projects on MOBILITY (2020) as well as short video descriptions on YouTube.

View all CROSS projects on DIGITAL HUMANITIES (2021) as well as short video descriptions on YouTube.

Interdisciplinary proofs-of-concept

A highlight of the CROSS event was the presentation of the following digital humanities tools and prototypes:

Yumeng Hou, Davide Picca, and Alessandro Adamou presented a first version of their martial arts ontology, developed as part of “CROSSINGS: computational interoperability for intangible cultural heritage”. The researchers aim to create an ontological model of intangible cultural heritage – in this case, the martial arts – using data from the Hong Kong Martial Arts Living Archive. They are combining information about Kung Fu with Chinese folklore to create a linked knowledge graph for Kung Fu heritage.

Robert West presented the project “ACCOMOJI. Emoji accommodation in Swiss multilingual computer-mediated conversations”, and demonstrated their online citizen science questionnaire. Participants are invited to annotate an open dataset of Swiss WhatsApp chats by indicating their understanding of the role, positivity, and intensity of the emojis used. The researchers are combining linguistics and data science to understand how emoji use converges or diverges over time.

Isabella di Lenardo and Marie-Hélène Côté presented a video demo of their searchable database, Names of Lausanne, as part of the project “Names of Lausanne: the evolution of family names in administration records 1803-1900”. The database, which contains millions of digitized cells of data from handwritten census archives, will allow researchers and citizens to explore historical family and place names from the city of Lausanne.

Fabian Moss introduced a proof-of-concept transcription pipeline for “Digitizing the dualism debate: case study in the computational analysis of historical music sources”. The goal of this project is to generate an interactive website where users can browse data on the heated “dualism debate” among 19th-century German music theorists. By transcribing the notes and texts by key figures in this debate, the researchers hope to enable computational analysis of historical music theory sources.

Maud Reveilhac, Tugrulcan Elmas and Karl Aberer presented a map of actors and interaction networks drawn from Twitter-based political conversations about foreign fighters returning to Europe from the Syrian conflict, as part of “Framing analysis of online discourse of returning foreign fighters and their families”. They used natural language processing to study tweets in English and French, and to analyze how fighters were portrayed or “framed” across regions, actors and influencers of online discourse.

The post What it really means to do interdisciplinary research appeared first on dhCenter.

6 questions for … Florence Andreacola

21 September 2021, 21:53

Florence Andreacola is an associate professor (maître de conférences) at Université Grenoble Alpes. Her research focuses on the place of digital culture in museums and in the visitor experience. To this end, she uses and interrogates interdisciplinary research techniques spanning computer science, data science, and the humanities and social sciences. She will speak at the first “Musées et médiations numériques” (“Museums and digital mediation”) meeting, which will take place on Tuesday 21 September at the Musée cantonal des Beaux-Arts de Lausanne.

 

Can you tell me about your background?

I started with a master’s degree in art history and archaeology at the University of Liège, specializing in museology. After my master’s, I wanted to dig deeper into the question of audiences, and I knew that Avignon had a museology group working specifically on reception, which approached exhibitions and museology from a communication perspective. So I did my thesis with Marie-Sylvie Poli on visitors’ uses of digital technology during their museum visits. The goal was not to study the digital devices present in the museum, but rather to observe visitors’ behavior with their own digital tools (smartphone or camera) during the visit experience. But the pre-visit phase also interested me: does the visitor consult the museum’s website, for example? Then, after the visit, do they return to the site, or move straight on to something else?

At that time, I was also teaching museology at the master’s level. I was then recruited by Université Grenoble Alpes to teach in the “Métiers du multimédia et de l’Internet” (MMI) program. I thus moved somewhat away from the cultural field to focus on the digital and communication dimensions. I support students in using digital tools and in building a communication strategy, though not necessarily one intended to promote a cultural institution.

And your current research?

I continue to work on museums and digital technology. After my thesis, I began a methodological reflection on collecting the data that visitors produce and share on online platforms such as Flickr, Instagram, Facebook and Twitter. My goal was to analyze the content and images produced, but the methodological obstacles first had to be cleared.

I have now solved these collection problems and have also been able to refine the analysis methods. Since my thesis, I have therefore been trying to combine fairly advanced statistical analysis techniques, such as methods drawing on artificial intelligence, with more classical semiological analyses from communication studies or art history.

Recently, I was able to collect a large corpus of images from the lockdown period. These are the famous montages, with a work of art on one side and, on the other, an amateur’s restaging of that work. It is a collection of several thousand images.

I have funding to analyze these images from a semiotic point of view, on a reduced corpus. In parallel, I also want to build a model for the automatic analysis of these images. Can our semiotic model be replicated? Can it be run with a machine learning tool? How do you reformulate a fairly complex analysis into a necessarily simpler algorithm? And with what result? That is the goal of the project.

What is the most surprising dimension of your research?

Thanks to the web, we are inside visitors’ heads! That’s a bit of a caricature, but before the internet became widespread, people visited museums and felt emotions in front of a work of art, or didn’t, and there was no material trace of that visit experience. Now that visitors document their own visits, a great deal of data about the activity of visiting is created and materialized somewhere. As a researcher, I find it wonderful to have access to these aspects of how a visit is received.

However, getting access to this data quickly becomes complicated. It is very dispersed: on the user’s device or on a private server, for example. It also has a different status in terms of information ownership. So how, as a researcher, can one collect this data ethically? And from a technical point of view: how can one collect quantities of data large enough to approach some form of representativeness? We also face the problem of having to go through channels like Instagram, which increasingly lock down bulk data downloads.

Can you give us a preview of your talk next Tuesday?

First of all, I would like to give an overview of the different kinds of digital devices that inhabit the museum, both inside and outside its walls, and that contribute to the visit experience.

Then I want to return to the injunctions and promises of digital technology in museums. Whether we are visitors or museum professionals, we face injunctions to go digital, because that is where “everything happens”. At the same time, many promises accompany the digital. So we are caught between these two fires, while knowing that not everything can be digitized, and that an “all-digital” museum is not necessarily desirable.

I also plan to address the strategic dimension of setting up a digital project in a museum institution, focusing on content. One method I want to dwell on is the transmedia storytelling strategy, which is in fact already practiced in museums. Designing an exhibition always involves constructing a multi-platform narrative: it is developed for the exhibition itself, but there is also a catalogue, lectures, and so on. The format therefore already exists in museums. With the theorization of the transmedia communication model, it can be pushed a bit further and better coordinated, integrating digital technology a bit more effectively.

Finally, I would like to talk about models for steering and developing a digital project. How do we organize the museum, and the information architecture within the institution? What tools can be used or deployed on its website to enable integrated project management or, conversely, management siloed by profession? This touches on the question of evaluation: how do we analyze the digitized data?

I would like to end by discussing the impact of a museum’s status on its management models: to what extent is a museum free to implement an integrated management model, depending on its governing authority or its managerial independence?

A book recommendation for our community? A Twitter account to follow? A digital resource?

  • The most stimulating book I have read recently:

Winkin (Yves). 2020. Ré-inventer les musées ? Paris: MkF éditions (Les Essais médiatiques). Followed by a dialogue on the digital museum with Milad Doueihi.

Reinventing museums? On the surface, everything is fine: there have never been so many museums in France, and never so many people in them. But are a few big museums hiding the forest? Museums frequented only by school groups or senior citizens, resting on their collections without asking themselves too many questions?

The Ministry of Culture’s recent “Musées du XXIe siècle” mission offered many avenues for action without exhausting the subject. Here, Yves Winkin proposes to extend this reflection, drawing on his combined experience as an anthropologist of communication and a museum director to suggest another approach, founded on the invention of new rituals: public ceremonies to bring museums into the 21st century.

Through 12 concrete rituals, the author invites us to rethink the role of museums in our society.

  • Instagram:

Margaux Brugvin, a feminist Instagrammer who publishes Reels every month, presenting a little-known woman artist or taking us on a tour of an exhibition. A valuable relay for talking about exhibitions.

Finally, would you like to share some of your publications?

Andreacola, F. (ed.), Doueihi, M. (contrib.). 2020. « Musées et mondes numériques », Culture & Musées, 35.

https://journals.openedition.org/culturemusees/4353

 

Andreacola, F. 2020. « Une nuée de musées numériques individuels et fragmentés ». La lettre de l’OCIM, 189.

https://fr.calameo.com/read/005777060b3354a7710fb

 

Andreacola, F. 2020. « Renouveler l’analyse de l’expérience culturelle des visiteurs de musées grâce à l’intelligence artificielle ? » In M.-S. Poli (ed.), Chercheurs à l’écoute. Méthodes qualitatives pour saisir les effets d’une expérience culturelle. Québec: PUQ.

https://www.puq.ca/catalogue/livres/chercheurs-ecoute-3815.html


dhCenter Scientific Committee Profile: Maristella Agosti

7 September 2021, 22:58

dhCenter Scientific Committee Profile

Maristella Agosti, Professor Emeritus, University of Padua

Maristella Agosti is an expert in information retrieval, digital libraries and archives, and digital cultural heritage. A pioneer in the field of information retrieval since before the advent of the web, Agosti brings her passion for the translation of research outcomes into best practices for archiving and documentation, as well as her technical expertise, to the dhCenter Scientific Committee.

In 2020, Maristella Agosti retired after more than 20 years as a full professor of computer science in the University of Padua’s Department of Information Engineering. She earned her laurea degree in statistics in 1975 from the same institution, with a thesis on algorithms for automatic classification.

From the beginning of her career, Agosti has focused on improving standards and practices for libraries and archives, starting with the then-“primitive” automation systems for background information retrieval for end users.

More context; less fragmentation

Agosti notes that turning research findings into concrete products and services for libraries and archives has always been a priority for her, an interest that has led her to become increasingly engaged with the social and human sciences over the course of her career.

“As I came into contact with the problems that needed to be solved to make documents and information in libraries, archives, and documentation centres more accessible to users, I began to study and get to know the characteristics of many humanistic sectors, and I have worked more with researchers of the humanities sector,” she says.

She adds that she believes the dhCenter has a responsibility to humanities researchers and students to provide digital archiving tools and standards, making available the fragmented data and documentation of the last 20-30 years.

“I would like to see the field of information retrieval provide not only useful information to the user, but also the context in which this information was created, as well as related information. For example, using linked data methods, the records of different cultural heritage institutions could be connected to each other in a sort of structure of linked archives. With these networks and with new types of citation graphs, users could explore larger volumes of data with greater context.”
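The linked-archives structure Agosti envisions can be sketched in a few lines of Python. The record identifiers, predicate names, and institutions below are hypothetical examples, not data from any real catalogue: the point is simply that once records from different institutions are expressed as triples, a shared entity becomes a join point across collections.

```python
# Minimal sketch of linked archives: records from two (hypothetical)
# institutions expressed as (subject, predicate, object) triples.
triples = [
    ("archiveA:rec42", "describes", "person:MariaSmith"),
    ("libraryB:doc7", "mentions", "person:MariaSmith"),
    ("archiveA:rec42", "heldBy", "org:ArchiveA"),
    ("libraryB:doc7", "heldBy", "org:LibraryB"),
]

def related_records(entity, triples):
    """Return every record, from any institution, that references the entity."""
    return sorted({s for s, p, o in triples if o == entity})

# The shared entity surfaces records across both institutions.
print(related_records("person:MariaSmith", triples))
# → ['archiveA:rec42', 'libraryB:doc7']
```

In practice this role is played by linked-data standards such as RDF, where globally unique URIs replace the toy identifiers used here.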

Looking ahead to her involvement with the dhCenter, Agosti says she hopes to use her expertise to aid the center’s programs based on her experience designing new research projects in Italy and across Europe.

“I am thinking in particular of the projects that I have implemented over the years for innovative access to digital information available in library automation and archive systems, and possibly also creating useful Virtual Research Environments (VRE).”

An information retrieval pioneer

Following her doctoral studies, Agosti did research in the UK, specializing in databases and information retrieval. As part of this work, she designed the first Online Public Access Catalogue (OPAC) in Italy, which was available before the world wide web. In 1987, she established the Information Management Systems (IMS) group, which was Italy’s first academic research group on information retrieval and digital libraries. She went on to work with other European colleagues on digital libraries in the 1990s, developing one of the first systems for making multimedia objects available.

In addition to serving on the editorial boards of numerous journals on information processing and retrieval, Agosti has herself published widely on the subject, as well as on user engagement and accessing digital cultural heritage collections. For her groundbreaking work, Agosti received the prestigious Tony Kent Strix Award, which honors outstanding contributions to the field of information retrieval, in 2016.


dhCenter Scientific Committee Profile: Jean-Philippe Cointet

7 September 2021, 20:57

dhCenter Scientific Committee Profile

Jean-Philippe Cointet, researcher, Sciences Po médialab

A self-described interdisciplinary “hybrid”, Jean-Philippe Cointet is an engineer and computer scientist by training, with a diverse research background in science and technology studies and sociology. He brings his experience with text corpora, as well as with interdisciplinary research more broadly, to the dhCenter Scientific Committee.

Jean-Philippe Cointet’s research interests range from social media analysis to political process mapping. He received his PhD in Complex Systems from the École Polytechnique in Paris in 2009, and defended his habilitation in social sciences at the École Normale Supérieure in 2017. Since 2017, he has been a researcher specializing in text analysis and the sociopolitical dynamics of corpora at the médialab of the Paris Institute of Political Studies (Sciences Po). Specifically, he designs innovative computational sociology methods, including original methods for modeling digital traces to map public space and its dynamics.

Cointet coordinates the project GOPI (La géométrie des problèmes publics), funded by the Agence nationale de la recherche, which designs new word-embedding methods for the modeling of public issues. He is also affiliated with Columbia University’s INCITE research center.

From computer science to CorText

After his PhD, Cointet worked at the French social science lab LISIS, where he first became interested in visualizing and modeling historical changes in scientific dynamics. It was in this context that he began using press and social media data to monitor societal debates on issues related to science and technology.

Today, Cointet is one of the primary designers of the digital text analysis platform CorText, which uses artificial intelligence to analyze data corpora from the social sciences.

He says that he hopes to bring his expertise in natural language processing (NLP) techniques, machine learning algorithms, and network analysis to the dhCenter.

“I can use my prior experience to address questions asked by sociologists or political scientists when they are confronted with a text corpus, such as: When and how can one mix both competencies when designing and developing such a tool? How can one recruit new contributors who are both users and co-designers of future capacities?”

Cementing interdisciplinary collaborations

As a member of the dhCenter Scientific Committee, Cointet emphasizes the importance of support for interdisciplinary research, noting that while this is an essential component of the digital humanities, it can also be a source of frustration.

“Even where there is a genuine desire to collaborate, the practices and norms in the social sciences and humanities and in engineering and computer science are so distant that it can take longer to get on the same page and align expectations,” he says.

“My experience is that these kinds of collaborations can take quite a long time to cement into fruitful interactions. From the dhCenter’s perspective, such initiatives should be given support over a significant period of time, so that new teams can build a common understanding.”

