Machine Learning Street Talk (MLST)

#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

Mar 13, 2023
In this engaging discussion, Dr. Raphaël Millière, a Columbia University lecturer in philosophy, delves into the intersection of AI, linguistics, and cognition. He explores how deep learning challenges traditional notions of self-representation and consciousness. Millière tackles the complexities of mimicry in AI, uncovering biases it may perpetuate. He also analyzes the limitations of large language models, emphasizing the grounding problem and the intricacies of human-like understanding, raising thought-provoking questions about the future of AI and its ethical implications.
INSIGHT

Semantic Competence vs. Understanding

  • 'Understanding' in language models should be viewed as semantic competence, encompassing lexical and structural competence.
  • Lexical competence can be referential (mapping words to real-world objects) or inferential (relating words within language).
ANECDOTE

SHRDLU vs. Language Models

  • SHRDLU, a classic symbolic AI system, demonstrates referential competence by manipulating objects in a virtual blocks world.
  • Language models lean towards inferential competence, deriving meaning from co-occurrence patterns.
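The inferential, co-occurrence-based route to meaning can be sketched with a toy distributional-semantics example. The corpus, window size, and similarity measure below are illustrative assumptions, not anything discussed in the episode; they just show how words appearing in similar contexts end up with similar vectors, with no reference to real-world objects.

```python
# Toy distributional semantics: meaning from co-occurrence alone.
from collections import defaultdict
from math import sqrt

# Tiny illustrative corpus (an assumption for the sketch).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count co-occurrences within a +/-1 word window.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words)):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if i != j:
                cooc[words[i]][words[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """Represent a word purely by its co-occurrence counts."""
    return [cooc[word][v] for v in vocab]

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" share contexts ("the ... sat"), so their vectors
# align more closely than "cat" and "mat" do.
print(cosine(vector("cat"), vector("dog")))
print(cosine(vector("cat"), vector("mat")))
```

Nothing here maps a word onto a thing in the world; the similarity structure emerges entirely from patterns inside the text, which is the inferential competence the snip attributes to language models.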
INSIGHT

Compression and Generalization

  • Language models compress data by learning underlying patterns, enabling generalization.
  • The lossy-compression analogy is misleading: inference isn't simply regurgitating degraded training data.