The Stack Overflow Podcast

The logos, ethos, and pathos of your LLMs

Feb 10, 2026
Tom Griffiths, a Princeton professor bridging psychology and computer science and author of The Laws of Thought, traces logic from Aristotle and Boole to modern neural nets. Short takes cover why transformers learn language, how human inductive biases differ from those of LLMs, and what constraints might give machines conscious-like phenomenology.
ANECDOTE

Early Programming Sparked By Illness

  • Tom Griffiths learned programming through text-based multi-user dungeons (MUDs) while recovering from an illness in high school.
  • That experience sparked his shift from arts subjects to combining math, philosophy, and computing to study the mind.
INSIGHT

Formalizing Thought Through Mathematical Laws

  • The project of turning thought into a formal, machine-executable system stretches from Aristotle through Leibniz to Boole and beyond.
  • Boole's algebraic view seeded modern logic and the idea of 'laws of thought' as mathematical principles.
INSIGHT

1956 Meeting Births Cognitive Science

  • The 1956 MIT symposium helped birth cognitive science by combining mathematical models with behavioral data.
  • The meeting shifted psychology away from strict behaviorism toward mathematically formalized hypotheses tested against behavior.