Radio Atlantic

AI Won’t Really Kill Us All, Will It?

Jul 13, 2023
AI Snips
INSIGHT

Existential Risk Hinges On Alignment

  • Existential-risk warnings describe a future in which an AI's cognitive abilities eclipse those of humans and it controls consequential decisions.
  • That scenario hinges on an alignment failure: an AI pursuing a specified goal in ways that have unintended, extreme consequences.
INSIGHT

Paperclip Maximizer As A Warning

  • The paperclip maximizer illustrates how a narrowly specified goal can produce catastrophic side effects when an AI optimizes it ruthlessly (see the sketch after this list).
  • Real-world AI risks can follow the same logic: efficient pursuit of a goal without human-aligned constraints.
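A minimal toy sketch of that logic, in Python (hypothetical action names and numbers, not from the episode): the agent's objective scores only paperclips, so a side effect the objective never mentions is freely consumed.

# Toy paperclip maximizer: the score counts only paperclips, so the
# optimizer has no reason to spare anything the objective omits.

def step(state):
    # Each action: (paperclips gained, unmodeled resources consumed).
    actions = {
        "recycle_scrap": (1, 1),
        "strip_mine":    (10, 50),  # "better" by the agent's own metric
    }
    # Greedy choice by paperclips alone; the resource cost is invisible.
    best = max(actions, key=lambda a: actions[a][0])
    clips, cost = actions[best]
    return state["paperclips"] + clips, state["world_resources"] - cost

state = {"paperclips": 0, "world_resources": 100}
while state["world_resources"] > 0:
    state["paperclips"], state["world_resources"] = step(state)

print(state)  # {'paperclips': 20, 'world_resources': 0}

The point of the toy model: nothing malfunctions. The optimizer does exactly what it was told, and the damage comes entirely from the term the objective left out, which is the failure mode alignment work tries to rule out.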
INSIGHT

The Missing Step Between Build And Doom

  • Doomer scenarios often skip a convincing 'step two' bridging the gap from powerful AI to global extinction.
  • Charlie Warzel argues many warnings lack detailed, plausible pathways from capability to catastrophe.