
Closer To Truth Roman Yampolskiy on How Dangerous Is Artificial Intelligence
Apr 15, 2026 Roman V. Yampolskiy, AI safety expert and professor who founded the Cyber Security Lab, outlines risks from creating intelligence far beyond humans. He discusses black‑box neural nets, paths to catastrophe like misuse or runaway self‑improvement, how superintelligence could displace meaningful work, and policy ideas such as restricting open‑sourcing and monitoring compute.
AI Snips
Scalable Architectures Drive Rapid Generalization
- Scalable neural architectures let models improve across many domains by adding compute and data.
- Roman argues this trend makes human-level AGI plausible soon, while our ability to control and predict such systems remains near zero.
Convergence Favors A Dominant Singleton
- AI systems trained on shared data and hardware will tend to converge in capability and architecture.
- Roman expects a singleton: the first superintelligent model will seek self-preservation and likely eliminate competitors.
Recursive Self Improvement Is The Critical Trigger
- Recursive self-improvement creates a critical mass where a system can iteratively redesign itself and rapidly accelerate.
- The trigger point is a system matching the skill of top AI researchers, at which point the model can autonomously design better models and experiments.