
The Take: How is the US using Anthropic's Claude AI in Iran?
Mar 6, 2026

Heidy Khlaaf, Principal Research Scientist at the AI Now Institute, focuses on AI safety in critical systems. She traces how Anthropic and OpenAI tools are funneled into military decision support. Short takes cover LLM roles in targeting, risks such as hallucinations and automation bias, the companies' ties to defense, and debates over meaningful oversight versus PR-driven safety claims.
LLMs Are Already Embedded In Military Decision Systems
- Frontier AI models have been normalized for military use through deals between companies like Anthropic and defense contractors such as Palantir.
- These models are already being embedded in decision support systems that prioritize targets using satellite imagery, social media, and intercepted communications.
LLM Accuracy Problems Make Them Unsuitable For The Fog Of War
- Large language models hallucinate frequently, with accuracy that can fall to roughly 50%, making them unreliable in the fog of war.
- Heidy Khlaaf warns these models struggle with novel, uncertain situations and shouldn't be trusted for life-or-death military decisions.
AI Decision Support Can Produce Specific Targeting Recommendations
- Decision support systems fed by AI synthesize imagery, social media, and intercepted communications to produce targeting recommendations.
- Those recommendations can specify targets such as infrastructure, hospitals, schools, or individuals, and suggest weapons such as missiles or drones.
