unSILOed with Greg LaBlanc

635. The Psychology of Computers with Tom Griffiths

Mar 30, 2026
Tom Griffiths, a Princeton professor studying computation and the mind, traces the 50‑year convergence of psychology and computer science. He compares artificial and natural minds, explains neural networks and transformers, and explores inductive bias, data needs, and how language and culture shape AI. The conversation also touches on modeling cognition, biases, and the future of specialized versus general AI.
INSIGHT

LLMs Learn By Predicting Next Tokens

  • Large language models solve an autoregressive prediction problem: predict the next token from prior tokens, learning probability distributions from text.
  • This makes them effective at language but mismatched with a child's multimodal, social, and embodied language learning.
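The autoregressive objective described above can be sketched with a toy bigram model: it is not how an LLM is implemented, but it shows the same idea of estimating a probability distribution over the next token from frequencies in prior text (the corpus and all names here are invented for illustration).

```python
# Minimal sketch of autoregressive next-token prediction, using a toy
# bigram count model as a stand-in for a trained language model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each context token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(context):
    """Estimate P(next token | context) from corpus frequencies."""
    c = counts[context]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

print(next_token_distribution("the"))  # "cat" is most probable: 2/3
```

A real model conditions on the whole preceding sequence with a transformer rather than a one-token context, but the prediction target is the same.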
ADVICE

Engineer Inductive Biases Not Just Scale Data

  • Build inductive biases into models instead of only scaling data; techniques include synthetic pre-training and meta-learning.
  • Griffiths' lab uses meta-learning to create initial weights that let networks learn from small amounts of data.
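The idea of meta-learning an initialization can be sketched with a Reptile-style outer loop on toy 1-D regression tasks. This is a generic illustration under invented assumptions (scalar weights, tasks of the form y = a·x), not the lab's actual method.

```python
# Minimal sketch of meta-learning an initial weight (Reptile-style),
# assuming a family of toy regression tasks y = a * x.
import random

random.seed(0)

def inner_sgd(w, a, steps=20, lr=0.1):
    """Adapt weight w to one task y = a*x by SGD on squared error."""
    for _ in range(steps):
        x = random.uniform(-1, 1)
        grad = 2 * (w * x - a * x) * x  # d/dw of (w*x - a*x)**2
        w -= lr * grad
    return w

# Outer loop: nudge the shared init toward each task's adapted weights,
# so a few inner steps suffice on any task from the family.
w0, meta_lr = 0.0, 0.5
task_slopes = [1.5, 2.0, 2.5]  # hypothetical family of related tasks
for _ in range(200):
    a = random.choice(task_slopes)
    w_adapted = inner_sgd(w0, a)
    w0 += meta_lr * (w_adapted - w0)  # Reptile outer update

print(round(w0, 2))  # init settles near the task-family mean (~2.0)
```

The learned initialization encodes an inductive bias: starting near the task family's center, a network needs far less data to fit any one task.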
INSIGHT

Autoregression Creates Predictability Biases

  • LLMs inherit functional biases from their training objective and data frequency, causing predictable but sometimes undesirable errors.
  • Example: GPT-4 favors the more common answer (30) over the rarer correct answer (29) when both are plausible.
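The frequency effect above can be illustrated with a toy scoring rule: when two answers fit the prompt about equally well, a corpus-frequency prior tips the choice toward the more common one. The counts and scores here are hypothetical, not measurements of GPT-4.

```python
# Toy illustration of a frequency-driven bias (hypothetical numbers).
corpus_frequency = {"30": 900, "29": 100}  # "30" is far more common in text
fit_to_prompt = {"30": 0.5, "29": 0.5}     # both answers equally plausible

def pick(answers):
    """Score each answer by plausibility weighted by corpus frequency."""
    total = sum(corpus_frequency.values())
    score = {a: fit_to_prompt[a] * corpus_frequency[a] / total for a in answers}
    return max(score, key=score.get)

print(pick(["30", "29"]))  # frequency breaks the tie in favor of "30"
```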