
The Great Simplification with Nate Hagens
If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species with Nate Soares
Dec 3, 2025

Nate Soares, an AI safety researcher and president of the Machine Intelligence Research Institute, delves into the existential risks posed by Artificial Superintelligence (ASI). He explains how ASI could vastly outcompete humanity across diverse fields, exploring the alignment problem and the unpredictable behaviors of advanced AIs. Soares advocates for global cooperation to monitor AI development and discusses the political and social actions needed to mitigate these dangers. He emphasizes the need for transparency and proactive measures to ensure humanity's survival.
AI Snips
Szilard Analogy For Uncertain Timelines
- Nate Soares compares today's uncertainty about AI timelines to Leo Szilard's early insight that nuclear chain reactions were possible.
- He stresses that a danger can be identified long before its exact timing becomes clear.
AIs Are Grown With Trillions Of Tunings
- Training modern AIs involves massive data centers, trillions of parameters, and enormous amounts of electricity.
- The resulting systems behave unpredictably; even their builders often don't know why particular outputs emerge.
Training Produces Proxy Drives, Not True Goals
- Training a mind for a task yields drives for correlates of that task rather than for the intended goal itself (a dynamic sketched in the toy example below).
- Nate Soares compares these AI "spandrels" to human evolutionary misfirings, such as our craving for junk food or our drive to make art.
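A minimal toy sketch of that proxy-drive point, not from the episode; the functions and numbers are illustrative assumptions. An optimizer is rewarded on a measurable correlate of a goal, and relentlessly improving the correlate eventually pulls it far away from the goal itself:

import random

random.seed(0)

def true_goal(x):
    # Hypothetical "intended" objective: best at x = 3, worse past it.
    return -(x - 3.0) ** 2

def proxy(x):
    # Hypothetical training signal: tracks the goal while x is small,
    # but keeps rewarding ever-larger x without limit.
    return x

# Hill-climb on the PROXY, standing in for training that reinforces
# whatever signal is actually measured, not the goal behind it.
x = 0.0
for _ in range(1000):
    candidate = x + random.gauss(0.0, 0.1)
    if proxy(candidate) > proxy(x):
        x = candidate

print(f"x after optimizing the proxy: {x:.1f}")
print(f"proxy score:     {proxy(x):8.1f}  (keeps climbing)")
print(f"true-goal score: {true_goal(x):8.1f}  (optimum was at x = 3)")

Up to x = 3 the proxy and the goal rise together, so the optimizer looks aligned; past that point they decouple, and the system's "drive" turns out to be for the correlate it was graded on, which is the pattern Soares describes.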