
Me, Myself, and AI
Connecting Language and (Artificial) Intelligence: Princeton’s Tom Griffiths
Jan 20, 2026

In this engaging discussion, Tom Griffiths, a Princeton professor specializing in AI and cognitive science, dives into his book, The Laws of Thought. He explores how mathematics has historically shaped our understanding of both human and machine intelligence. Tom elaborates on three frameworks (rules, neural networks, and probability) that drive modern AI and connects these concepts to language. He emphasizes the unique human skills of judgment and metacognition while discussing the limits of large language models and the future of human-AI collaboration.
AI Snips
Neural Nets As Continuous Representations
- Neural networks model continuous representations where concepts are regions in feature spaces.
- They let systems learn mappings between feature spaces, solving learning problems that symbolic logic could not (a toy sketch follows below).
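
As a toy illustration of that idea (not from the episode; the two-feature setup and the data are invented for this sketch), the snippet below samples two "concepts" as clouds of points in a continuous 2-D feature space and trains a single sigmoid unit to learn the mapping from features to concept labels, so each concept ends up as a region on one side of the learned boundary.

    # Hypothetical illustration: concepts as regions of a continuous feature space,
    # with a minimal "network" (one sigmoid unit) learning the feature-to-label mapping.
    import numpy as np

    rng = np.random.default_rng(0)

    # Sample points for two concepts as clouds (regions) in a 2-D feature space.
    concept_a = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(100, 2))
    concept_b = rng.normal(loc=[+1.0, +1.0], scale=0.5, size=(100, 2))
    X = np.vstack([concept_a, concept_b])
    y = np.concatenate([np.zeros(100), np.ones(100)])

    # One sigmoid unit, trained by gradient descent on cross-entropy loss.
    w, b = np.zeros(2), 0.0
    for _ in range(200):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of concept B
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= 0.5 * grad_w
        b -= 0.5 * grad_b

    # A new point is classified by which region of the learned boundary it falls in.
    new_point = np.array([0.8, 0.9])
    print("P(concept B) =", 1.0 / (1.0 + np.exp(-(new_point @ w + b))))

The learned weights carve the continuous space into two regions, which is the sense in which a concept here is a region rather than a symbolic rule.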
Probability Ties Learning To Uncertainty
- Probability and statistics explain inductive inference and how to reason under uncertainty.
- They help clarify why modern AI approaches, such as LLM training, actually work (a toy Bayesian update is sketched below).
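
A minimal worked example of that kind of reasoning (again invented for illustration, not taken from the episode or the book): Bayes' rule applied to inferring a coin's bias from a handful of flips, with beliefs over a small set of hypotheses updated after each observation.

    # Hypothetical illustration: inductive inference as Bayesian updating.
    # Beliefs over hypotheses about a coin's bias are revised as evidence arrives.
    import numpy as np

    hypotheses = np.array([0.1, 0.3, 0.5, 0.7, 0.9])       # candidate values of P(heads)
    prior = np.full(len(hypotheses), 1 / len(hypotheses))   # uniform prior belief

    observations = [1, 1, 0, 1, 1, 1]                       # 1 = heads, 0 = tails

    posterior = prior.copy()
    for flip in observations:
        likelihood = hypotheses if flip == 1 else (1 - hypotheses)
        posterior = posterior * likelihood                  # Bayes' rule numerator
        posterior /= posterior.sum()                        # normalize over hypotheses

    for h, p in zip(hypotheses, posterior):
        print(f"P(bias = {h}) = {p:.3f}")

After six flips, belief shifts toward the higher-bias hypotheses while uncertainty about the exact value remains, which is the core of reasoning under uncertainty the snip describes.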
Three Levels Of Explaining Minds
- Different math systems explain intelligence at different levels: computational, algorithmic, and implementational.
- Logic and probability characterize ideal solutions, while neural nets offer plausible algorithms for approximating them.
