Many Minds

What can AI teach us about the mind?

Mar 26, 2026
Gary Lupyan, a UW–Madison psychologist who studies language and cognition, and Mike Frank, a Stanford psychologist focused on language learning in children, explore what modern AI reveals about the mind. They compare how LLMs learn with how children learn language, debate data gaps and grounding, examine language as a potent training signal, and probe model reasoning, confabulations, and the limits of pattern matching.
INSIGHT

Predictive Training Yields Emergent World Models

  • Modern LLMs learn world models and causal relations from language alone despite being trained on next-token prediction.
  • Mike Frank emphasizes that strong predictive performance can yield emergent causal and semantic knowledge that a priori assumptions about the loss function would not predict.
INSIGHT

Feeding Models The Same Stimuli As Humans

  • Stimulus computability lets researchers feed models the exact stimuli humans saw, aligning computational tests with human experiments.
  • Mike Frank calls this a qualitative advance enabling models to receive full vignettes or natural images rather than hand-coded binary features.
ADVICE

Test Models With Infant-Style Controlled Experiments

  • Treat models like infants: design tightly controlled, diagnostic experiments before inferring shared representations.
  • Mike Frank recommends applying developmental experimental rigor to avoid assuming language-like internal representations.