London Real

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

Jul 19, 2024
Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control over technology smarter than humans, and the potential benefits and drawbacks of universal basic income. The conversation explores the challenges of programming ethics and human values into AI systems, the risks posed by superintelligent AI, and how AI safety research has evolved over time.
ANECDOTE

Students Question AI Career Impact

  • University students express curiosity but also confusion about AI's impact on their careers.
  • They question the value of learning skills that AI may soon render obsolete as it outperforms humans in many professions.
INSIGHT

AI Accidents Fail to Drive Safety

  • AI accidents to date have not prompted meaningful safety improvements.
  • Past failures normalize AI risks and reduce the perceived urgency of regulating dangerous systems.
INSIGHT

Competitive AI Development Risks

  • A prisoner's dilemma drives AI labs to compete on capability, pushing dangerously close to superintelligence.
  • Economic pressures prevent collective restraint, risking severe outcomes if any one lab crosses the line.