"All Lawful Use": Much More Than You Wanted To Know
Apr 2, 2026
A heated policy clash over whether AI should be allowed for "all lawful use," with questions about loopholes that could permit mass surveillance or autonomous weapons. A recent supply chain designation and a rapid agreement reshuffle the landscape. The discussion unpacks the legal and policy gaps that let governments repurpose AI, and offers a list of hard questions for demanding clearer safeguards.
Why Anthropic Got Branded A Supply Chain Risk
- The Department of War labeled Anthropic a supply chain risk after Anthropic refused to allow use of its AIs for mass surveillance and autonomous weapons.
- OpenAI then struck an agreement in principle, claiming guarantees against those uses, raising doubts that its safeguards are weaker or toothless.
Legal Guarantees Likely Won't Lock In Protections
- ACX readers and the authors conclude OpenAI's promises aren't enough, because current law has loopholes and DOW rules can change.
- Contract language, and claims that the terms lock in "current law," likely won't prevent future loosening.
How The Law Treats Mass Versus Targeted Surveillance
- Mass foreign surveillance is broadly lawful and the executive claims inherent presidential power to authorize it, with courts often declining challenges.
- Targeted domestic surveillance requires court permission, but incidental bulk collection is permitted, with access afterward limited to targeted queries.
