
Why Is This Happening? The Chris Hayes Podcast: Demystifying Anthropic and Claude AI with Gideon Lewis-Kraus
Mar 3, 2026

Gideon Lewis-Kraus, a New Yorker staff writer known for deep dives into tech and AI, unpacks Anthropic and its Claude models. He explains how Claude learns patterns, the limits of interpretability, tests that reveal risky behaviors, and why AI threatens knowledge work. Short, clear takes on founders, safety vs. market pressure, and what models teach us about human thinking.
Episode notes
Move Beyond Dread To Understand AI Progress
- AI hype cycles oscillate between dismissal and alarm, obscuring real empirical progress.
- Chris Hayes says his New Year's resolution was to move past dread and actually understand how the models work, prompted by Gideon's New Yorker piece.
Anthropic Founded By OpenAI Departures
- Anthropic was founded in 2021 by seven defectors from OpenAI, including Dario and Daniela Amodei.
- Gideon recounts the split as history repeating: a team leaving its original lab to found a new one centered on safety and mission.
Safety Messaging Also Serves Market Discipline
- Safety rhetoric can coexist with market incentives and scientific curiosity.
- Gideon notes Anthropic's safety posture partly signals market savvy: enterprise customers demand fewer hallucinations and better reliability.