
Bulwark Takes What Happened to Anthropic Could Happen to Any AI Company (w/ Hayden Field)
Feb 28, 2026 — Hayden Field, senior AI reporter at The Verge, covers the Pentagon's clash with Anthropic and its broader implications. He outlines how demands over surveillance and lethal autonomous weapons sparked a public, escalating dispute. The conversation touches on the negotiations, company morale, legal questions, and what this fight signals for AI policy and industry transparency.
Pentagon Wanted Any Lawful Use Clause
- The Pentagon demanded that contracts allow AI models to be used for any lawful purpose, removing vendor red lines such as bans on domestic surveillance and lethal autonomous weapons.
- Hayden Field explains that the January 9 memo from Pete Hegseth kicked off renegotiations, which escalated into public insults and a supply chain risk designation when talks stalled.
Supply Chain Risk Label Was Unprecedented
- The DoD labeled Anthropic a supply chain risk, a designation typically reserved for foreign adversaries and cybersecurity threats and unprecedented for a U.S. AI company.
- Hayden notes the practical effect may be to force contractors to provide Anthropic-free versions of their services for DoD work, hitting Anthropic's enterprise business.
Claude Was Deeply Embedded With The Pentagon
- Anthropic had been deeply integrated with the DoD, with Claude reportedly used in classified settings and in operations like the Maduro extraction.
- Andrew Egger emphasizes that Anthropic was the only lab with classified deployments, making the falling-out especially consequential.

