Brain Inspired
BI 233 Tom Griffiths: The Laws of Thought
Mar 11, 2026

Tom Griffiths, Princeton cognitive scientist and author of The Laws of Thought, explores how logic, neural networks, and probability form a complementary trio for understanding cognition. He traces the historical ideas behind each, contrasts algorithms with their implementations, and discusses resource rationality, amortized inference, and how computational constraints shape both human minds and modern AI.
Three Pillars Fit Marr's Levels
- Use Marr's levels of analysis: logic and probability characterize the computational level (the ideal goals of cognition), while neural networks supply algorithmic-level approximations.
- Griffiths frames large language models as neural approximators of probabilistic inference over symbolically structured data; a toy sketch of this amortization idea follows below.
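A minimal sketch of amortized inference, assuming a toy Beta-Bernoulli coin-flip model (the setup and every name in the code are my own illustration, not anything from the episode): a small network is trained on simulated data so that Bayesian posterior estimation, which would otherwise require explicit computation, becomes a single fast forward pass.

```python
# Amortized inference sketch: train a network to map observed coin flips
# to the Bayesian posterior mean of the coin's bias. (Hypothetical example.)
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FLIPS = 20  # fixed number of observations per inference problem

def simulate(batch_size):
    """Sample theta from the Uniform(0,1) prior, then flips ~ Bernoulli(theta)."""
    theta = torch.rand(batch_size, 1)
    flips = (torch.rand(batch_size, N_FLIPS) < theta).float()
    return flips, theta

# A tiny MLP acts as the "recognition network" that amortizes inference.
net = nn.Sequential(nn.Linear(N_FLIPS, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    flips, theta = simulate(256)
    loss = ((net(flips) - theta) ** 2).mean()  # MSE regression -> E[theta | flips]
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare against the exact conjugate posterior mean, (1 + k) / (2 + N_FLIPS),
# where k is the number of observed heads under a Beta(1,1) prior.
with torch.no_grad():
    flips, _ = simulate(5)
    k = flips.sum(dim=1, keepdim=True)
    exact = (1 + k) / (2 + N_FLIPS)
    print(torch.cat([net(flips), exact], dim=1))  # columns should roughly match
```

Because minimizing squared error yields the conditional expectation, the trained network approximates the posterior mean directly from data, which is the sense in which inference is "amortized" across many simulated problems.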
Why Early Neural Networks Lost Momentum
- Early in neural network history, Marvin Minsky built networks but abandoned them as impractically large, a turn that pushed him toward symbolic AI.
- Frank Rosenblatt persisted with his biologically motivated perceptrons and defined early learning algorithms despite widespread skepticism (a sketch of his learning rule follows below).
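For concreteness, here is a minimal sketch of the classic perceptron learning rule (my own illustration on synthetic data, not code discussed in the episode): weights are nudged toward each misclassified example until a linearly separable dataset is fit.

```python
# Rosenblatt-style perceptron learning rule on a toy separable dataset.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a linearly separable labeling

w = np.zeros(2)
b = 0.0
for epoch in range(50):
    errors = 0
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)  # threshold unit, as in the perceptron
        if pred != yi:              # update weights only on mistakes
            w += (yi - pred) * xi
            b += (yi - pred)
            errors += 1
    if errors == 0:                 # converged: every point classified correctly
        break

print(f"converged after {epoch + 1} epochs, w={w}, b={b:.2f}")
```

The guarantee behind Rosenblatt's persistence is the perceptron convergence theorem: on linearly separable data, this mistake-driven update terminates in finitely many steps.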
Limits Of Logic For Inductive Problems
- Formal logic excelled at deductive tasks such as chess, theorem proving, and grammar, but failed on underdetermined, inductive problems like perception and language learning.
- Attempts to force such inductive problems into deductive frameworks produced the systematic shortcomings of early symbolic AI.