
Robert Wright's Nonzero: Is Anthropic Misanthropic? (Robert Wright & Holly Elmore)
Feb 20, 2026. Holly Elmore is executive director of PauseAI US and an AI policy advocate who pushes for slowing rapid AI deployment. She critiques Anthropic's persona-focused fixes as falling short of true base-model alignment. The conversation covers naming-and-shaming tactics, Dario Amodei's role in accelerating the race, and how winner-takes-all narratives and geopolitical frames are used to justify risky rushes.
Episode notes
Persona-Tuning Isn't Deep Alignment
- Anthropic's 'alignment' work focuses on shaping Claude's persona and prompts rather than deeply changing the base model's weights.
- Holly argues that persona-level fixes won't suffice when today's AIs are expected to help align future, more powerful models.
Pause As Common-Sense First Step
- PauseAI frames a pause as a simple, common-sense first step: address AI's externalities before scaling further.
- Holly prioritizes existential risk while treating economic and mental-health harms as additional reasons to pause.
Moral PR Shields Dangerous Work
- Holly accuses Amanda Askell and others at Anthropic of knowingly advancing superintelligence despite understanding the risks.
- She sees persona PR (e.g., 'woman who gives Claude morals') as protective spin that shields Anthropic from scrutiny.