
Robert Wright's Nonzero: Averting a Superintelligence Takeover (Robert Wright & Andrea Miotti)
Mar 30, 2026

Andrea Miotti, founder and CEO of Control AI, works on preventing and governing dangerous AI and superintelligence. He defines superintelligence and explains how AGI might rapidly become vastly more capable. He and Robert Wright explore where the nuclear analogy helps and where it fails. Miotti outlines detectable warning signs, inspection priorities, and a bottom-up strategy for mobilizing governments amid rising geopolitical pressures.
Episode notes
How Control AI Defines Superintelligence
- Superintelligence means AI vastly more competent than both individuals and groups of humans.
- Andrea Miotti points to the stated goals of companies such as Meta Superintelligence, OpenAI, and Anthropic: systems that outperform entire companies or governments across many tasks.
Where The Nuclear Analogy Breaks Down
- The nuclear analogy has limits: AI is less conspicuous and easier to distribute than reactors or bombs.
- Robert Wright notes that powerful models could be downloaded and run widely, unlike large, detectable nuclear plants.
Prohibit Superintelligence And Monitor Key Precursors
- Prioritize a clear prohibition on developing superintelligence and monitor precursors.
- Andrea Miotti recommends outlawing the intentional development of superintelligence while monitoring compute scale, automated AI R&D, and hacking capabilities that could breach containment.

