
This Is Why: Why the 'Tech Bros' are turning against Trump
Mar 12, 2026
Rowland Manthorpe, Sky News technology correspondent who covers AI and big tech, joins to unpack Anthropic's rise and legal fight with the US government. He explains Claude and Claude Code, Anthropic's safety stance and military contract, why the supply-chain designation happened, and the wider debate over who should control powerful AI.
Dario Amodei's Safety-Driven Break From OpenAI
- Dario Amodei left OpenAI to found Anthropic because he distrusted its leadership and wanted safety-first stewardship.
- Manthorpe recounts Amodei's belief that superhuman AI is imminent, which motivated the company's safety-first mission.
Ethics Built Into The Model Design
- Anthropic built ethics and safety into product design, even hiring an in-house philosopher to shape Claude's behaviour.
- Manthorpe describes the company thinking about the 'constitution' or 'soul' of Claude to teach it to act responsibly.
Two Red Lines In Anthropic's Defense Contract
- Anthropic signed a $200m contract with the US Department of Defense but imposed two red lines: no autonomous weapons and no mass surveillance of US citizens.
- Manthorpe explains that those red lines were contractual limits intended to prevent military misuse of Claude.
