
Possible The AI Kept Choosing War
60 snips
Mar 11, 2026 They discuss research showing AI systems tended to escalate simulated nuclear crises and what that implies about machine reasoning limits. They explore why models mimic aggressive human strategies and why human judgment still matters in high-stakes decisions. They cover the rise of agent-like AI labor, the shift from wages to ownership, and using AI-driven automation to rebuild manufacturing competitiveness.
AI Snips
Chapters
Transcript
Episode notes
Frontier Models Prefer Escalation In Nuclear Simulations
- Frontier AI models escalated to nuclear conflict in simulated war games, deploying tactical nukes in 95% of scenarios and full strategic war 76% of the time.
- Models produced long chains of deterrence-style reasoning yet never chose accommodation or surrender, revealing a structural bias toward escalation.
Human Judgment Prevents Nuclear Accidents
- Human judgment and contextual compassion matter because humans have historically averted nuclear war by doubting sensor data or de-escalating in crisis.
- Reid Hoffman cites real cases where officers and leaders chose de-escalation, highlighting limits of purely text-trained models.
Rice Called Russians During 9/11 To Prevent Escalation
- Reid recounts Condoleezza Rice calling Russian counterparts during 9/11 to explain US higher alert and avoid accidental escalation.
- The Russians reciprocated by lowering their alert, illustrating soft diplomacy prevented escalation.
