Brain Inspired

BI 233 Tom Griffiths: The Laws of Thought

Mar 11, 2026
Tom Griffiths, a Princeton cognitive scientist and author of The Laws of Thought, explores how logic, neural networks, and probability form a trio of frameworks for understanding cognition. He traces the historical roots of each, contrasts algorithms with implementations, and discusses resource rationality, amortized inference, and how computational constraints shape both human minds and modern AI.
INSIGHT

Three Pillars Fit Marr's Levels

  • Use Marr's levels: logic and probability specify the computational-level (ideal) goals, while neural networks provide algorithmic-level approximations.
  • Griffiths frames large language models as neural approximators of probabilistic inference over symbolically structured data (a minimal sketch of this amortization idea follows below).
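
As an illustration of the "neural approximation of probabilistic inference" framing, here is a minimal sketch of amortized inference on a toy coin-flip model (the model, network size, and numbers are illustrative assumptions, not from the episode): a tiny network is trained once to map observations to the exact Bayesian posterior mean, after which each inference is a single cheap forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: a coin with unknown bias theta, flipped N_FLIPS times.
N_FLIPS = 20
thetas = rng.uniform(0, 1, size=5000)
heads = rng.binomial(N_FLIPS, thetas)

# Exact Bayesian answer under a uniform Beta(1, 1) prior:
# posterior mean of theta is (heads + 1) / (N_FLIPS + 2).
exact_mean = (heads + 1) / (N_FLIPS + 2)

# Amortized recognition network: one hidden layer, trained by gradient
# descent to map the observed head fraction straight to the posterior mean.
X = (heads / N_FLIPS).reshape(-1, 1)
Y = exact_mean.reshape(-1, 1)
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2
    err = pred - Y                    # gradient of squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)  # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Query time: inference is a single forward pass, no explicit Bayes rule.
h_obs = 13
x = np.array([[h_obs / N_FLIPS]])
approx = (np.tanh(x @ W1 + b1) @ W2 + b2).item()
print(f"exact posterior mean: {(h_obs + 1) / (N_FLIPS + 2):.3f}")
print(f"amortized estimate:   {approx:.3f}")
```

The design point is the cost shift: the expensive probabilistic computation is paid once at training time, so query-time inference runs at neural-network speed.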
ANECDOTE

Why Early Neural Networks Lost Momentum

  • Marvin Minsky built early neural networks but abandoned them as impractically large, which pushed him toward symbolic AI.
  • Frank Rosenblatt persisted with biologically motivated perceptrons, defining early learning algorithms despite widespread skepticism.
INSIGHT

Limits Of Logic For Inductive Problems

  • Formal logic excelled at deductive tasks such as chess, theorem proving, and grammar, but failed on underdetermined, inductive problems like perception and language learning.
  • Attempts to force inductive problems into deductive frameworks led to the systematic shortcomings of early symbolic AI (see the sketch below for why induction is underdetermined).
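
A toy Bayesian concept-learning sketch in the spirit of this point (the number-game-style hypotheses and data are illustrative assumptions, not drawn from the episode): several candidate rules are each deductively consistent with the observed examples, so logic alone cannot rank them, while a probabilistic size principle can.

```python
# Toy hypothesis space over the integers 1..100. Several rules below are
# logically consistent with the data, so deduction alone cannot choose.
hypotheses = {
    "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
    "powers of 2":     {2 ** k for k in range(1, 7)},   # 2, 4, ..., 64
    "multiples of 4":  {n for n in range(1, 101) if n % 4 == 0},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
}
data = [4, 8, 16]

prior = {h: 1 / len(hypotheses) for h in hypotheses}
posterior = {}
for h, extension in hypotheses.items():
    if all(x in extension for x in data):
        # Size principle: examples drawn uniformly from the extension,
        # so smaller (more specific) hypotheses earn higher likelihood.
        likelihood = (1 / len(extension)) ** len(data)
    else:
        likelihood = 0.0  # inconsistent rules are ruled out deductively
    posterior[h] = prior[h] * likelihood

Z = sum(posterior.values())
for h, p in posterior.items():
    print(f"{h:>15}: {p / Z:.3f}")
```

Running it, "powers of 2" dominates because it is the smallest hypothesis containing the data; logic would call all three surviving rules equally acceptable.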