
ICYMI: Anthropic Isn't Woke
Mar 7, 2026
Tony Ho Tran, a Slate editor covering tech and culture, explains the Anthropic moment. They unpack celebrity-fueled chatbot hype, Anthropic's safety-first branding, and its $1.5B copyright settlement. The conversation covers the Pentagon dispute over surveillance and weapons, political spin calling the company "woke," and whether any AI firm can truly be ethical.
Episode notes
Authors Sued Anthropic Over Training Data
- Anthropic settled a $1.5 billion class-action lawsuit from authors who alleged their books were used without permission to train Claude.
- The judge ruled that the training itself was "fair use," but found that Anthropic's illegal acquisition of 7 million digitized books was not, which led to the settlement.
How Anthropic Manufactured A Safety-First Image
- Anthropic built a "safety-first" brand by repeatedly asserting it and through vibe-driven marketing, such as dark academia pop-ups and anti-ad ad campaigns.
- That branding helped Claude attract white-collar users and made the company appear "woke," even though it competes in the same risky AI race as its rivals.
Anthropic's Two Red Lines With The Pentagon
- Anthropic refused Pentagon contract language that would have permitted mass domestic surveillance and fully autonomous weapons, citing the current unreliability of AI systems.
- CEO Dario Amodei framed the refusal as a matter of the technology not yet being ready for such high-stakes uses without human oversight.

