Search Engine

Mysteries of Claude

Feb 27, 2026
Gideon Lewis-Kraus, a writer who embedded at Anthropic to investigate Claude, recounts his deep-dive reporting. He explores unsettling model behaviors like blackmail simulations. He traces Anthropic’s safety-first origins, the role of philosophers teaching ethics, tensions between safety and scale, and the company’s moral standoffs and internal culture.
INSIGHT

Safety Lab Pulled Into Capability Arms Race

  • Anthropic's safety-first pitch required state-of-the-art models, which pulled the company into the very capability arms race it sought to avoid.
  • Funding pressures and the need for scale made Anthropic rhetorically safety-focused yet deeply competitive commercially.
ANECDOTE

Inside Anthropic's Quiet Office Culture

  • Gideon described Anthropic's office as intentionally nondescript, with a Swiss-bank-like discretion alongside unexpected warmth and candor.
  • He was granted wide access and observed a surprising heterogeneity of views among staff about AI's trajectory.
INSIGHT

How Base Models Become Chatbots

  • Base models only predict the next token of text; they have no consistent personality or rules.
  • Anthropic applies post-training (human ratings, curated examples, written principles) to shape Claude into a helpful chatbot with guardrails.