Into the Impossible With Brian Keating

Princeton Scientist: We Don't Understand AI - Tom Griffiths - #553

Apr 29, 2026
Tom Griffiths, a Princeton professor bridging psychology and computer science, tackles why AI still cannot learn like a child. He explores the 250-year thread from Leibniz to Turing, why scale alone will not close the human-machine gap, and why sycophantic AI, rather than hallucinations, should worry us. Short, sharp takes on language learning, inductive bias, and what a child's mind reveals about true intelligence.
AI Snips

Chomsky Turned Language Into A Generator Problem

  • Chomsky reframed language as a generative math problem: a grammar is a rule-based generator for the set of valid sentences.
  • That shift made it possible to ask precise learnability and structure questions about language.
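The generator idea above can be made concrete. The following is a minimal sketch, not from the episode: a toy context-free grammar (symbols and words invented for illustration) treated as a machine that enumerates exactly the set of sentences it deems valid.

```python
# Minimal sketch of "grammar as generator": a toy context-free grammar
# whose expansion enumerates every valid sentence it defines.
# All symbols and words here are illustrative, not from the episode.
import itertools

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["sees"], ["chases"]],
}

def expand(symbol):
    """Yield every terminal word sequence the symbol can generate."""
    if symbol not in GRAMMAR:           # terminal word: yield it as-is
        yield [symbol]
        return
    for production in GRAMMAR[symbol]:  # nonterminal: expand each rule
        # Cartesian product of the expansions of each right-hand symbol
        for parts in itertools.product(*(list(expand(s)) for s in production)):
            yield [word for part in parts for word in part]

sentences = [" ".join(words) for words in expand("S")]
print(len(sentences))   # 2 nouns x 2 verbs x 2 nouns = 8 sentences
print(sentences[0])     # "the cat sees the cat"
```

This is the precise sense in which a grammar defines a language: the set of sentences is whatever the rules can generate, which is what makes learnability questions well-posed.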

Leibniz's Proto-Embeddings Story

  • Gottfried Wilhelm Leibniz tried to build a reasoning machine by representing terms as small vectors and manipulating them arithmetically.
  • Griffiths highlights Leibniz's proto-embedding idea as a direct ancestor of today's vector word representations.
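To illustrate the lineage Griffiths points to, here is a toy sketch, not from the episode: words as small vectors, with reasoning done by arithmetic on them. The vectors and the king/queen analogy are invented for the example, loosely encoding (royalty, gender).

```python
# Toy sketch of "terms as small vectors, manipulated arithmetically".
# The 2-d vectors below are invented for illustration: roughly
# dimension 0 = royalty, dimension 1 = maleness.
import math

vectors = {
    "king":  [0.9, 0.9],
    "queen": [0.9, 0.1],
    "man":   [0.1, 0.9],
    "woman": [0.1, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(v, exclude=()):
    """Word whose vector is most similar to v, skipping excluded words."""
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], v))

# Analogical reasoning as arithmetic: king - man + woman ~ queen
v = [k - m + w for k, m, w in
     zip(vectors["king"], vectors["man"], vectors["woman"])]
print(nearest(v, exclude={"king", "man", "woman"}))  # "queen"
```

Modern word embeddings work on the same principle at much higher dimension, which is why the Leibniz idea reads as a direct ancestor.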

Boole Gave Logic An Algebraic Toolkit

  • George Boole supplied the algebraic toolkit Leibniz lacked, formalizing logic and probability into an algebra of thought.
  • Boole's An Investigation of the Laws of Thought prefigured logic-based computation and later influenced Turing and von Neumann.
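Boole's core move can be sketched in a few lines. This is an illustrative reconstruction, not the episode's example: truth values as 0 and 1, AND as multiplication, NOT as subtraction from 1, and logical laws checked by plain arithmetic.

```python
# Sketch of Boole's "algebra of thought": logic done as arithmetic
# over {0, 1}. The encoding follows the standard reading of Boole;
# his own notation differs in details.
from itertools import product

AND = lambda x, y: x * y          # conjunction as multiplication
NOT = lambda x: 1 - x             # negation as subtraction from 1
OR  = lambda x, y: x + y - x * y  # inclusive or, kept within {0, 1}

# Verify De Morgan's law, not(x and y) == (not x) or (not y),
# by checking every 0/1 assignment.
for x, y in product((0, 1), repeat=2):
    assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan's law holds for all 0/1 assignments")
```

Once logic is arithmetic, it is mechanizable, which is the bridge from Boole's book to logic-based computation in Turing and von Neumann.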