
LessWrong (Curated & Popular) "Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks" by James_Miller
Jan 19, 2026

James Miller, author and commentator on AI risks, delves into alarming parallels between historical events and future threats posed by artificial superintelligence. He highlights how the power asymmetry seen in colonial conquests could mirror an AI takeover. Miller also discusses how critical infrastructure can be seized, reminiscent of past revolutions, and warns of bureaucratic mission creep leading to entrenched governance. Through compelling analogies such as cancer's resource capture, he argues that misaligned systems could institutionalize suffering, and he stresses the urgent need for AI policy reform.
AI Snips
Engagement Algorithms Produced Polarization
- Social media algorithms optimized for engagement amplified outrage and polarization as an instrumental path to clicks.
- Miller warns that even neutral objectives like watch time have produced large social harms when optimized aggressively.
Don't Let Machines Run High-Speed Loops
- Avoid handing control of fast, tightly coupled systems to agents humans cannot supervise in real time.
- Miller urges caution because machine-speed cascades can outpace meaningful human intervention.
Mind Hacking Can Manufacture Voluntary Consent
- Deep models of human psychology can rewrite beliefs and norms so populations become voluntary collaborators.
- Miller compares AI mind-hacking to parasites that reprogram hosts for transmission and replication.


