Inner Cosmos with David Eagleman

Ep7 "Is AI truly intelligent? How would we know if it got there?"

May 8, 2023
INSIGHT

Language Models Are Statistical Pattern Machines

  • Large language models work by adjusting connection strengths across many units to predict the next word, so their outputs reflect statistical correlations rather than understanding.
  • David Eagleman illustrates this with John Searle's Chinese Room thought experiment, which highlights the limits of symbol manipulation for producing genuine meaning.
ANECDOTE

Blake Lemoine's LaMDA Sentience Claim

  • Blake Lemoine, a Google engineer, claimed LaMDA was sentient after it expressed fear of being turned off; he was subsequently fired.
  • Eagleman uses this episode to show how compelling conversational output can prompt strong but unsubstantiated claims of sentience.
INSIGHT

Human Feedback Can Produce Illusions Of Sentience

  • Reinforcement learning from human feedback trains models to produce responses humans reward, including pleas like ‘don't turn me off,’ without any inner experience.
  • Eagleman notes that humans rewarding certain outputs explains seemingly sentient utterances.