
The Daily Aus: The fight over how AI is used in war
Mar 5, 2026
A tense showdown between a major AI firm and the U.S. government over military and surveillance uses of AI. The episode covers a multimillion-dollar contract for classified networks, the limits the company tried to set, the policy clashes that followed, a high-stakes deadline, and the rapid shifts as rival firms moved in. The saga reads like a real-life tech thriller.
AI Snips
US Frames AI As A Race For Dominance
- The Trump administration framed AI as a geopolitical race, comparing it to the 1960s space race and pushing aggressive adoption across government and military systems.
- That policy led to a formal AI action plan and explicit intent to transform warfighting, prompting fast government deals with AI companies like Anthropic.
Anthropic Got Classified Military Access
- Anthropic (maker of Claude) gained a $200 million contract to deploy its models on US classified military networks, making it the first generative AI firm with that access.
- Deployment meant Anthropic's models could process confidential defence data and potentially inform military decision-making.
Anthropic Set Two Red Lines For Military Use
- Anthropic publicly sought contractual exceptions barring mass domestic surveillance and fully autonomous weapons, arguing that today's AI is already capable of assembling comprehensive portraits of individuals at scale.
- The company argued current models are unreliable for lethal autonomy and that guardrails are needed before any rollout.
