GHisBERT - A language model for historical German
Christin Beck (Konstanz)

While static embeddings have dominated computational approaches to lexical semantic change for quite some time, recent approaches leverage the contextualized embeddings generated by large language models to identify semantic shifts in historical texts. However, despite their usefulness for detecting changes in the more recent past, it remains unclear how well such language models scale to investigations reaching further back in time, where the language differs substantially from the training data underlying these models. In this talk, I will present GHisBERT, a BERT-based language model trained from scratch on historical data covering all attested stages of German (going back to Old High German, c. 750 CE), the outcome of collaborative work with Marisa Köllner from the University of Tübingen. I will show that, in a lexical similarity analysis of ten stable concepts, our GHisBERT model outperforms both an unmodified and a fine-tuned German BERT base model at assessing inter-concept similarity as well as intra-concept similarity over time. Overall, this argues for the necessity of pre-training historical language models from scratch when working with historical linguistic data.
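
As a rough illustration of how such a lexical similarity analysis can be set up with contextualized embeddings, the sketch below extracts a target word's vector from a BERT model and compares occurrences via cosine similarity. This is not the code underlying GHisBERT: the checkpoint name (the public German BERT base model), the mean-pooling over the target's subword tokens, and the example sentences are assumptions made purely for illustration; in practice the GHisBERT checkpoint and attestations from the historical corpora would be substituted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; the GHisBERT model would be loaded here instead.
MODEL_NAME = "bert-base-german-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_embedding(sentence: str, target: str) -> torch.Tensor:
    """Contextualized vector for `target`, mean-pooled over its subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the target's subword span within the tokenized sentence.
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"'{target}' not found in tokenized sentence")

# Intra-concept similarity: the same concept in two attestations (in practice, one
# sentence would come from an older language stage, the other from a modern corpus).
v1 = word_embedding("Das Wasser in dem Fluss ist kalt.", "Wasser")
v2 = word_embedding("Sie trinkt ein Glas Wasser.", "Wasser")

# Inter-concept similarity: two different concepts, expected to be less similar.
v3 = word_embedding("Das Feuer brennt im Ofen.", "Feuer")

print("intra-concept:", torch.cosine_similarity(v1, v2, dim=0).item())
print("inter-concept:", torch.cosine_similarity(v1, v3, dim=0).item())
```

For a model whose training data covers the relevant language stages, one would expect intra-concept similarities to remain high across time while inter-concept similarities stay comparatively low; how the pooling and the set of attestations are chosen is a design decision not specified here.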