Inner Cosmos with David Eagleman

Ep147 "Can we engineer human thought?" with Tom Griffiths

Mar 30, 2026
Tom Griffiths, a Princeton cognitive scientist who studies probabilistic models of human learning, joins David Eagleman to explore whether thought can be mathematized. He traces the history from early logicians to modern neural networks, contrasts how children learn from little data with AI's need for vast datasets, and highlights the roles of probability, inductive biases, and the interplay of symbolic and neural-network approaches.
INSIGHT

Bayes' Rule Is The Grammar Of Uncertainty

  • Probability theory, and Bayes' rule in particular, provides the formal language for reasoning under uncertainty and describes how beliefs should update as evidence arrives.
  • Griffiths frames beliefs as probabilities and explains how new evidence (storm clouds) should raise belief in rain according to Bayes' rule; a worked numerical sketch follows below.
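A minimal sketch of the update Griffiths describes, in Python. The probabilities are illustrative assumptions, not figures from the episode: a prior belief in rain is combined with the likelihood of seeing storm clouds to give a posterior via Bayes' rule.

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Return P(hypothesis | evidence) given P(hypothesis),
    P(evidence | hypothesis), and P(evidence | not hypothesis)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers (assumptions for the sketch):
p_rain = 0.2                # prior belief that it will rain today
p_clouds_given_rain = 0.9   # storm clouds are very likely if rain is coming
p_clouds_given_dry = 0.1    # storm clouds are unlikely on a dry day

posterior = bayes_update(p_rain, p_clouds_given_rain, p_clouds_given_dry)
print(f"P(rain | storm clouds) = {posterior:.2f}")  # ~0.69: belief in rain rises
```

Seeing the clouds moves the belief in rain from 0.2 to roughly 0.69, which is the kind of evidence-driven update Bayes' rule formalizes.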
INSIGHT

Irrationality Often Reflects Resource Limits

  • Apparent human 'irrationality' can be reframed as approximately rational behavior under tight computational and resource constraints.
  • Griffiths argues rationality should be judged by the best strategies available given finite compute, not by comparison with an ideal Bayesian agent with unlimited resources (see the sketch after this list).
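A minimal, hypothetical sketch of the resource-rational idea in Python: an ideal agent enumerates every hypothesis exactly, while a bounded agent approximates the same posterior with however many samples its compute budget allows. All hypotheses and numbers here are assumptions made up for illustration.

```python
import random

hypotheses = ["sunny", "cloudy", "rain"]
prior      = {"sunny": 0.5, "cloudy": 0.3, "rain": 0.2}
likelihood = {"sunny": 0.05, "cloudy": 0.5, "rain": 0.9}  # P("dark sky" | hypothesis)

def exact_posterior():
    """Ideal Bayesian agent: enumerates every hypothesis (costly when there are many)."""
    joint = {h: prior[h] * likelihood[h] for h in hypotheses}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

def sampled_posterior(budget):
    """Bounded agent: approximates the posterior with only `budget` prior samples,
    each weighted by how well it explains the evidence."""
    weights = {h: 0.0 for h in hypotheses}
    for _ in range(budget):
        h = random.choices(hypotheses, weights=[prior[x] for x in hypotheses])[0]
        weights[h] += likelihood[h]
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

print("exact:      ", exact_posterior())
print("budget=10:  ", sampled_posterior(10))    # noisy but cheap
print("budget=1000:", sampled_posterior(1000))  # converges toward the exact answer
```

With a small budget the bounded agent's answer is noisy yet still roughly tracks the exact posterior, which is the sense in which "irrational-looking" shortcuts can be the best strategy given finite compute.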
INSIGHT

Human Minds Were Shaped By Harsh Constraints

  • Human intelligence is an adaptation shaped by constraints such as limited lifespan, limited computation, and low communication bandwidth, producing solutions optimized for those constraints.
  • Language, writing, and institutions evolved as ways to circumvent individual limitations and offload cognition.