A Review of Manuale di letteratura elettronica
Reconfiguring the Ideal Order: Ablation and Field Formation in the Twentieth-Century Nigerian Novel in English
Infrastructures of Listening: The ManoWhisper Podcast Analysis Pipeline
The Eras Tour: Machine Learning for Dating Historical Texts from Greco-Roman Egypt
Digital bioethics: exploring an emerging field
Med Health Care Philos. 2026 Apr 16. doi: 10.1007/s11019-026-10347-1. Online ahead of print.
ABSTRACT
The uptake of social science methods by bioethics has significantly expanded its methodological spectrum, raising new theoretical, methodological, and practical questions. More recently, we have been witnessing another trend: the addition of advanced data science methods to bioethics' toolkit to aid, for example, in online data analysis, to support scholarly writing, and to inform clinical ethics. This article explores the emerging field of Digital Bioethics across its dimensions by analysing the tangled relationship between topics and methods, highlighting intersections between Digital Bioethics and Bioethics of the Digital, and advocating for a methods-based definition of the field. The use of advanced data science methods within bioethics must be interpreted in the context of the use of Artificial Intelligence (AI) in health care; at the same time, it presents unique opportunities and challenges. Defining, and thus demarcating, Digital Bioethics can create support for the new field, but it also requires navigating trade-offs. To do so, we take four kindred academic fields as points of comparison (Digital Humanities, Experimental Philosophical Bioethics, computational medicine, and digitised biology) and analyse what each of them teaches us about critically assessing and further developing Digital Bioethics. The article discusses potential pitfalls and concludes with recommendations on how the field can fully develop its potential to advance bioethical research and argument. Furthermore, the article discusses how critical reflection on the use of AI methods within bioethics itself will also contribute to the ethical oversight of increasingly AI-driven branches of healthcare.
PMID:41989660 | DOI:10.1007/s11019-026-10347-1
Striving towards automated writing – Views on authorship in story generation research
Large language models for history, philosophy, and sociology of science: Interpretive uses, methodological challenges, and critical perspectives
Stud Hist Philos Sci. 2026 Mar 30;117:102151. doi: 10.1016/j.shpsa.2026.102151. Online ahead of print.
ABSTRACT
This paper examines large language models (LLMs) as research tools in the history, philosophy, and sociology of science (HPSS). Because LLMs can work directly with heterogeneous, unstructured texts and capture meaning-relevant associations from usage patterns, they offer new ways to bridge close reading and corpus-scale analysis, challenging the idea that computational scale and interpretive nuance must trade off. We provide a compact primer on LLMs, covering the main components of their neural network architecture, the differences between generative and full-context models, and adaptation strategies such as fine-tuning, prompt-based learning, and retrieval-augmented generation (RAG). Building on this foundation, we analyze how LLMs recast three classic methodological problems in HPSS: working with historically messy data, detecting and interpreting large-scale patterns, and modeling scientific change over time. Across these areas we synthesize recent work in HPSS and adjacent fields, and we clarify how LLM outputs can function as exploratory prompts, as inputs to more structured pipelines, or as evidence under stricter validation and documentation. We conclude with four lessons: 1) model choice embeds interpretive trade-offs, 2) responsible use requires LLM literacy, 3) HPSS should develop its own tasks and evaluation practices, and 4) LLMs should extend rather than replace established interpretive methods. We also situate these methodological questions within broader concerns about platform dependence, accountability, and the responsibilities attached to research infrastructures. Finally, we argue that HPSS is well positioned both to use LLMs and to interrogate what counts as explanation, evidence, and responsible use in interpretive research.
PMID:41916166 | DOI:10.1016/j.shpsa.2026.102151