Odd Lots

Anthropic, the Pentagon, and the Future of Autonomous Weapons

Mar 28, 2026
Paul Scharre, former Army Ranger and ex-Pentagon official now at CNAS, dives into the Anthropic clash and what autonomous weapons really are. He gets into how AI helps find targets and plan strikes. The conversation explores when human oversight turns into rubber stamping, who should set the rules, and how drones, agents, and battlefield algorithms could change war.
ANECDOTE

How The Pentagon Wrote Its First Autonomy Rules

  • Around 2011, Paul Scharre led the Pentagon effort that produced the autonomy-in-weapons policy still used today.
  • The work grew out of Iraq and Afghanistan, where thousands of drones and bomb-disposal robots sparked an “accidental robotics revolution.”
INSIGHT

Why The Pentagon Cannot Build Frontier AI Alone

  • AI puts the military in an unusual dependency because the frontier technology sits in commercial firms, not secret defense labs.
  • Scharre contrasts this with stealth technology: even a $200 million Anthropic contract is small, because civilian AI markets attract more talent and capital than defense does.
INSIGHT

The Real Fight Is Who Sets AI Rules

  • Scharre says the Anthropic dispute is less about deploying killer robots now than about who gets to set the rules for lawful military AI use.
  • The Pentagon wanted contracts allowing any lawful use, while labs keep their own restrictions on things like offensive cyber operations.