
Make Me Smart Who gets to set limits on AI?
Mar 19, 2026
Justin Hendrix, CEO and editor of Tech Policy Press and an analyst of tech policy and AI governance, breaks down the Anthropic–Pentagon legal clash. Short takes cover how the dispute began, red lines around military use, supply chain risk labeling, industry backlash, and why this fight could reshape public-private AI relations.
AI Snips
Anthropic's Red Lines Versus Pentagon's All Uses
- Anthropic set two explicit red lines: no mass domestic surveillance and no lethal autonomous weapons.
- The Pentagon wanted AI usable for all lawful purposes, creating a direct conflict over who gets to decide permissible uses.
Maduro Raid Sparked The Dispute
- Tension escalated after Anthropic's Claude was reportedly used during the strike targeting Venezuela's Maduro, which alarmed the company.
- That event prompted internal concern at Anthropic, escalated the issue to Pentagon leadership, and sparked the dispute.
Supply Chain Risk Is A Major Escalation
- The Pentagon labeled Anthropic a supply chain risk, a designation typically used for firms tied to foreign adversaries.
- That label is extraordinary because Anthropic is a successful U.S.-based AI company, and it signals potential exclusion from defense systems.
