Language Models as Linguistic Glass Boxes
Kyle Mahowald (UT Austin)

While it's common to think of language models as black boxes with opaque inner workings, I will argue that if we train our own models on controlled datasets, they can be powerful tools for linguistics. I will offer some general thoughts on the role of LLMs in linguistic theory, along with illustrations from my own work exploring how language models learn rare linguistic constructions.