Because Language - a podcast about linguistics, the science of language.

121: Learning from LLMs (with Adele Goldberg)

Jun 29, 2025
In this engaging discussion, Professor Adele Goldberg, a pioneering constructionist and Princeton psychology professor, delves into the intricacies of large language models (LLMs) and their parallels to human language. She explores the concept of constructions—form-meaning pairings—and how they inform grammar. The conversation reveals how LLMs absorb biases and learn context, while also highlighting frequency effects and the evolution of meaning over time. Prepare for a thought-provoking look at language that will change how you think about AI and communication!
ADVICE

Teach Constructions Not Empty Trees

  • Avoid teaching grammar as separate phrase-structure rules detached from words; emphasize how meaning and verb preferences shape syntax.
  • Teach constructions and lexical tendencies rather than building sentences from abstract trees first.
ANECDOTE

Ate His Way Through Example

  • Adele uses the "ate his way through" example to show that a construction lets many different verbs appear in the same pattern.
  • Even normally transitive verbs like "devour" can appear without an explicit object inside a construction.
INSIGHT

Text Statistics Can Yield Meaning

  • LLMs show that it is possible to derive rich, context-dependent language knowledge from massive text statistics alone.
  • Goldberg admitted she was surprised that they could handle meaning without real-world experience.