
Front Burner: Iran and AI on the battlefield
Mar 6, 2026 Heidy Khlaaf, Chief AI Scientist at the AI Now Institute and a former OpenAI researcher, discusses the military risks of AI and its role in modern conflict. She breaks down how LLMs are used for targeting, surveillance, and decision pipelines. The conversation covers company-military ties, accountability gaps, automation bias, and how AI is reshaping state power in warfare.
Episode notes
Lavender Example Showed Mass Target Generation
- Israeli systems like Lavender and Gospel used non-LLM AI to identify tens of thousands of alleged targets, later supplemented by GPT‑4 and Gemini.
- Khlaaf notes that those tools generated and validated targets en masse after cloud contracts expanded following October 7.
Speedy LLM Targeting Is A High-Tech Carpet Bombing
- Rapid LLM-driven targeting functions like modern carpet bombing, generating many potential targets quickly rather than making precise selections.
- Khlaaf argues speed plus inaccuracy normalizes strikes and undermines legal wartime accountability.
Do Not Let Automation Bias Replace Human Judgment
- Avoid relying on decision support outputs without robust human checks because automation bias leads operators to blindly accept AI recommendations.
- Khlaaf cites decades of research showing that humans tend to trust algorithmic outputs and rubber-stamp them in practice.
