Jack Neel Podcast

AI Safety Expert: Humanity’s Last Invention—99.99% Chance of Extinction

Dec 30, 2025
Dr. Roman Yampolskiy, an AI safety researcher and professor, shares alarming insights on the existential risks of advanced AI. He argues that humanity faces a 99.99% chance of extinction from superintelligence, and explores the dangers of recursive self-improvement and the incentives driving AI development. Roman highlights societal shifts already underway because of AI, from job automation to moral challenges. He emphasizes the importance of regulation and warns against pursuing AGI too hastily, all while maintaining a surprisingly calm outlook on our future.
INSIGHT

No Scalable Safety Solution Exists Yet

  • Roman disagrees with the safety community's optimism, arguing that no safety mechanism that scales to superintelligence has been demonstrated.
  • He considers controlling an agent a million times smarter than humans impossible, so stopping development is the only surefire option.
ADVICE

Insist On Peer‑Reviewed Scalable Safety

  • Roman says he would lower his doom estimate only if a peer‑reviewed, community‑accepted, scalable safety mechanism were published.
  • Demand rigorous, public, verifiable solutions before trusting AGI deployment.
ANECDOTE

Family Reaction To Apocalypse Claims

  • Roman says his wife dismisses his doomsday views, calling them 'BS' while life goes on.
  • He notes that many people habitually ignore existential threats such as aging or extinction.