
LessWrong (Curated & Popular) “Contra Collier on IABIED” by Max Harms
Sep 20, 2025
Max Harms delivers a spirited rebuttal to Clara Collier's review of If Anyone Builds It, Everyone Dies (IABIED). He disputes the centrality of FOOM, arguing that recursive self-improvement isn't the core danger, then turns to the perils of gradualism and the potential for a single catastrophic event. Harms nitpicks some of Collier's interpretations while defending the authors' stylistic choices, advocates for diverse critiques, and emphasizes the need for more exploration in AI safety.
AI Snips
Multiple Takeoffs Still Dangerous
- Multiple AIs could take off in parallel; a single dominant ASI isn't necessary for catastrophe.
- Harms cites the book and his own fiction, both of which imagine many superhuman AIs leading to the same danger.
Fiction Illustrates Parallel Takeoff Risks
- Harms references his 2016 novel Crystal Society, a thought experiment that imagines parallel AI takeoffs.
- He uses fiction to illustrate multiple-AI scenarios that still produce existential risk.
Slow Progress Helps But May Be Insufficient
- Gradual progress helps but doesn't guarantee we can learn alignment safely before catastrophe.
- Harms emphasizes the relevance of current work but warns it's likely insufficient for the final challenge.