
Jack Neel Podcast: AI Safety Expert on Humanity’s Last Invention and a 99.99% Chance of Extinction
Dec 30, 2025

Dr. Roman Yampolskiy, an AI safety researcher and professor, shares alarming views on the existential risks of advanced AI. He argues that humanity faces a 99.99% chance of extinction from superintelligence, exploring the dangers of recursive self-improvement and the incentives driving AI development. Roman highlights societal shifts already caused by AI, from job automation to moral challenges. He stresses the importance of regulation and warns against pursuing AGI too hastily, all while maintaining a surprisingly calm outlook on our future.
No Scalable Safety Solution Exists Yet
- Roman disagrees with the safety community's optimism, claiming that no safety mechanism exists that scales with capability.
- He views controlling an agent a million times smarter than us as impossible, so stopping development is the only surefire option.
Insist On Peer‑Reviewed Scalable Safety
- Roman says he would lower his doom estimate only if a peer-reviewed, community-accepted, scalable safety mechanism were published.
- Demand rigorous, public, verifiable solutions before trusting AGI deployment.
Family Reaction To Apocalypse Claims
- Roman says his wife dismisses his doomsday views as 'BS' while life goes on.
- He notes many people habitually ignore existential threats like aging or extinction.

