
Scaling Laws: AI as Abnormal Technology? Scott Sullivan Analyzes AI in the Military Domain
Apr 21, 2026
Scott Sullivan, a law professor at West Point and contributor to the Manual on AI in Warfare, explores the governance of military AI. He contrasts civilian and military incentives, explains how secrecy, externalized costs, rapid scaling in targeting, and pressures from strategic competition shape adoption, and discusses testing, lawfulness-by-default design, and interdisciplinary training as ways to manage AI risks in conflict.
Episode notes
Military Incentives Drive Rapid AI Adoption
- Military incentive structures prioritize operational advantage and mission success over profit-driven checks on risky tech adoption.
- Scott Sullivan contrasts commanders chasing speed and precision with civilian businesses, which internalize costs and face market and reputational checks.
Externalized Costs Lower Military Risk Aversion
- Military AI adoption externalizes costs to taxpayers and often shifts failure harms onto civilians, reducing internal restraints on experimentation.
- Sullivan cites duplicative U.S. government contracts and Israeli targeting errors as cases where the costs of failure are borne by society rather than the adopting institution.
Secrecy Creates Dangerous Epistemic Opacity
- Secrecy and the fog of war remove third-party oversight and make it hard to detect and correct AI-driven errors in operations.
- Sullivan warns that hidden errors can become training data, reinforcing bad behavior in deployed systems.
