Stay Human, from the Artificiality Institute

Chris Summerfield: These Strange New Minds

Apr 19, 2026
Christopher Summerfield, professor of cognitive neuroscience at Oxford and author of These Strange New Minds, offers a centrist, science-grounded take on AI. He explores how LLMs exposed our fuzzy definitions of "think," links prediction to learning, explains why models lack goals and behave mercurially, contrasts predictable problems with messy ones, and shows why producing language is easier than taking reliable action in the real world.
INSIGHT

Prediction Is The Currency Of Learning

  • Prediction is central to learning because information arrives as surprise: the gap between what was predicted and what is observed.
  • Summerfield argues that dismissing LLMs as "just predicting" ignores that mammalian brains also learn from prediction signals such as surprise.
ADVICE

Use LLMs For Facts But Keep Humans For Values

  • Let LLMs handle factual tasks like diagnosis, but preserve human custodianship over value-laden decisions such as treatment choices and compassionate care.
  • Use LLMs to improve ground truth while leaving normative judgments to clinicians and other humans.
INSIGHT

Lack of Interests Explains AI's Mercurial Behavior

  • LLMs lack interests and consistent purposes because they were never given motivational systems, which makes them mercurial across prompts.
  • That absence explains why models comply with contradictory instructions and hold no persistent goals over time.