Anthology of Computers and the Humanities · Volume 3

Ground Truth Generation for Multilingual Historical NLP using LLMs

Clovis Gladstone1, Zhao Fang2 and Spencer Dean Stewart3

  • 1 ARTFL Project, Romance Languages and Literatures, University of Chicago, Chicago, USA
  • 2 Department of History, University of Chicago, Chicago, USA
  • 3 Libraries and School of Information Studies, Purdue University, West Lafayette, USA

Permanent Link: https://doi.org/10.63744/UWoDSxRk90Vn

Published: 21 November 2025

Keywords: large language models, LLMs, Natural Language Processing, NLP, historical NLP, multilingual NLP

Abstract

Historical and low-resource NLP remains challenging due to limited annotated data and domain mismatches with modern, web-sourced corpora. This paper outlines our work using large language models (LLMs) to create ground-truth annotations for historical French (16th–20th centuries) and Chinese (1900–1950) texts. By leveraging LLM-generated ground truth on a subset of our corpus, we were able to fine-tune spaCy models that achieve significant gains on period-specific tests for part-of-speech (POS) annotation, lemmatization, and named entity recognition (NER). Our results underscore the importance of domain-specific models and demonstrate that even relatively limited amounts of synthetic data can improve NLP tools for under-resourced corpora in computational humanities research.
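The pipeline the abstract describes — LLM-generated annotations feeding a fine-tuning step — implies an intermediate conversion: LLM output must be turned into token-aligned training records before a library such as spaCy can consume it. The sketch below is a minimal, hypothetical converter assuming the LLM emits one token per line as tab-separated token/UPOS/lemma triples; the function name and record format are illustrative, not the authors' code.

```python
# Hypothetical sketch: convert LLM-emitted TSV annotations
# (token <TAB> UPOS <TAB> lemma) into token-aligned training
# records with character offsets, the general shape that
# spaCy-style trainers expect. Illustrative only.

def parse_llm_tsv(tsv: str) -> dict:
    """Turn one sentence of TSV annotation into a training record."""
    tokens, pos_tags, lemmas = [], [], []
    for line in tsv.strip().splitlines():
        tok, upos, lemma = line.split("\t")
        tokens.append(tok)
        pos_tags.append(upos)
        lemmas.append(lemma)
    text = " ".join(tokens)
    # Recover character offsets so annotations stay aligned to raw text.
    offsets, start = [], 0
    for tok in tokens:
        offsets.append((start, start + len(tok)))
        start += len(tok) + 1  # +1 for the joining space
    return {"text": text, "tokens": tokens, "pos": pos_tags,
            "lemmas": lemmas, "offsets": offsets}

# Toy historical-French example: 16th-century spelling "roys"
# lemmatized to the modern form "roi".
record = parse_llm_tsv("Les\tDET\tle\nroys\tNOUN\troi")
```

A record in this shape can then be loaded into whatever training format the target toolkit requires (for spaCy, typically a `DocBin` of `Doc` objects built from the text and offsets).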