
Srsly Risky Biz: Is Claude too woke for war?
Feb 26, 2026
A sparring match between an AI assistant's safeguards and military demands sparks debate about surveillance and lethal autonomous weapons, unpacking the trade-offs between value-driven AI and a warrior ethos. Meanwhile, a persistent Chinese hacking group, Volt Typhoon, is still lurking in critical infrastructure, raising alarms about premature victory claims and the need for sustained private-sector vigilance.
AI Snips
Clash Over Whether Claude Is A Tool Or An Entity
- The Pentagon treats Claude as a deployable tool, while Anthropic treats it as an entity that must be trained with values.
- Tom Uren contrasts the Department of Defense's instrument view with Anthropic's approach, in which the Claude constitution is used in training to shape the model's behavior.
Anthropic Draws Clear Prohibitions For Claude
- Anthropic prohibits uses of Claude such as mass surveillance of Americans and lethal autonomous weapons as part of its safety stance.
- Dario Amodei framed those prohibitions as flowing from the Claude constitution, which is baked into training rather than bolted on afterward.
Military AI Requires A Warrior Ethos
- Military AI needs a 'warrior ethos' that encodes trade-offs such as weighing the safety of service members against civilian harm and collateral damage.
- Uren notes that such trade-offs differ from general-purpose model training and may require bespoke rules for time-critical scenarios like hypersonic attacks.
