
London Real Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity
Jul 19, 2024
Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control over technology smarter than humans, and the potential benefits and drawbacks of universal basic income. The conversation delves into the challenges of programming ethics and human values into AI systems and the evolution of AI safety research over time.
AI Snips
Students Question AI Career Impact
- University students express curiosity but also confusion about AI's impact on their careers.
- They question the value of learning skills that may become obsolete as AI outperforms humans in many professions.
AI Accidents Fail to Drive Safety
- AI accidents so far have not prompted meaningful safety improvements.
- Past failures normalize AI risks and reduce the urgency to regulate dangerous systems.
Competitive AI Development Risks
- A prisoner's-dilemma dynamic drives AI labs to compete in pushing capabilities dangerously close to superintelligence.
- Economic pressures prevent collective restraint, risking severe outcomes if any one lab crosses the line.

