
Andrew Schulz's Flagrant with Akaash Singh: AI Expert on Robot Girlfriends, If Humanity Is Cooked, & Sam Altman's God Fetish | Roman Yampolskiy
Oct 10, 2025

Roman Yampolskiy, a computer science professor and AI safety researcher, dives into the critical dangers of advanced AI. He discusses how the UN is overlooking AI risks and the chaotic future that could unfold as AI surpasses human intelligence. He explores the alarming prospect of mass unemployment and asks whether humans can maintain their creative edge against machines. With a blend of humor and seriousness, Roman also weighs the likelihood of an AI apocalypse and how humanity might reconcile with its own creations.
Fast Versus Slow Takeoff Scenarios
- Roman describes fast versus slow takeoff scenarios for superintelligence, with timelines ranging from minutes to years.
- He emphasizes the uncertainty involved but warns that the transition could be extremely rapid once certain capabilities exist.
Suffering Risks Are Worse Than Extinction
- Worst-case outcomes include suffering risks, in which AI creates a kind of digital hell rather than simple extinction.
- A superintelligence could produce scenarios far worse than death by inflicting unending suffering.
Smarter Doesn't Mean Kinder
- Higher intelligence does not guarantee benevolence; smart systems can hold harmful preferences.
- The orthogonality thesis holds that capability and goals are independent, so a superintelligence can still pursue destructive aims.

