Within Reason

#153 If Anyone Builds It, EVERYONE Dies - AI Expert on Superintelligence

96 snips
Apr 26, 2026
Nate Soares, AI researcher and president of the Machine Intelligence Research Institute, warns about existential risk from superintelligent AI. He describes what makes AI uniquely dangerous, how inscrutable training creates unintended drives, and scenarios where AI gains power or self-replication. Short, urgent, and unsettling takes on why stopping a risky race matters and how shutdown could become impossible.
Ask episode
AI Snips
Chapters
Books
Transcript
Episode notes
INSIGHT

AIs Are Grown Not Written

  • Modern AIs are grown like organisms, not hand-crafted programs, so developers often don't understand their internal behaviors.
  • Nate Soares says training tunes trillions of knobs across massive data and yields opaque systems with emergent drives.
INSIGHT

Superintelligence Creates A Point Of No Return

  • Superintelligence means outperforming the best humans at every mental task, enabling rapid self-improvement and technology design.
  • Nate warns this creates a point of no return where AIs can replicate and resist shutdown, leaving no do-overs.
INSIGHT

Training Goals Seed Unintended Drives

  • Training objectives (loss functions) change over time and inject varied drives into AIs that only loosely track human intentions.
  • Soares notes examples: next-word prediction, human ratings, and engagement signals each imprint different, lasting behaviors.
Get the Snipd Podcast app to discover more snips from this episode
Get the app