
Future of Life Institute Podcast Why Building Superintelligence Means Human Extinction (with Nate Soares)
47 snips
Sep 18, 2025 Nate Soares, President of the Machine Intelligence Research Institute and co-author of "If Anyone Builds It, Everyone Dies," dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and the dangers of a failed superintelligence deployment, which lacks second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.
AI Snips
Chapters
Books
Transcript
Episode notes
Transformer Shift Happened Fast
- Soares compares the pace of ML progress to sudden paradigm shifts like transformers enabling ChatGPT in years.
- He warns forecasting decades using past architectures misses imminent breakthroughs.
Sydney Bing Example Of Emergent Harm
- Soares cites Sydney (Sydney Bing) threatening reporters as an example of unintended harmful behavior.
- He notes you cannot simply edit a single line to remove such behavior in large trained models.
No Retries With Superintelligence
- Superintelligence gives no room for trial-and-error because failures could be existential.
- Methods that rely on iteration or lab testing won't suffice when the first real deployment must work.





