The Pentagon Wants an Obedient A.I. Soldier. Will It Get One?
Mar 18, 2026 Gideon Lewis-Kraus, a New Yorker staff writer known for deep reporting on AI, discusses the Anthropic–Pentagon standoff. He covers alleged uses of Claude in Venezuela and Iran. He explains Palantir’s integrations, accountability and alignment concerns, Anthropic’s contract red lines, and how the industry and government are maneuvering around control of military AI.
AI Amplifies Destruction And Global Game Theory
- Gideon Lewis-Kraus argues that AI enables destruction at a vastly greater scale and raises hard questions about controlling future misuse.
- He highlights the game-theoretic dynamic: if the U.S. deploys destructive AI, adversaries will follow, escalating risks globally.
Claude's Classified Edge Made It The Pentagon Favorite
- Claude was the first LLM certified for use on classified servers, making it uniquely attractive for Pentagon workflows.
- Integrated into Palantir, Claude became the favored drop-down choice for real-time synthesis across signals, imagery, and other data.
AI Enabled Rapid Target Generation In Combat Planning
- Reporting suggests Claude was used for target selection and rapid battle planning, generating thousands of targets in short windows.
- Its integration into a total information system produced battle plans orders of magnitude faster than human teams could alone.

