Dwarkesh Podcast

Dario Amodei — "We are near the end of the exponential"

Feb 13, 2026
Dario Amodei is an AI researcher and CEO of Anthropic, renowned for his work on large-scale models and AI risk. He discusses why scaling and task-specific RL may generalize, how AI could diffuse through the economy, timelines for near-AGI-like capabilities, Anthropic’s compute and profitability choices, and the governance and geopolitical risks tied to powerful models.
INSIGHT

Big Blob Of Compute Explains Progress

  • Large-scale compute, broad and diverse data, long training runs, and scalable objectives are the main drivers of AI progress.
  • Dario frames pre-training and RL as the same “big blob of compute” pathway to generalization across tasks.
INSIGHT

Pretraining Across Wide Data Enables Generalization

  • Pre-training on a broad data distribution yields surprising generalization not seen with narrow corpora.
  • In-context learning gives models rapid short-term adaptation that complements long pre-training.
ADVICE

Focus RL On Diverse Tasks, Not Specific Skills

  • Design RL environments to provide broad task diversity rather than to teach every specific skill.
  • Training on many tasks helps models generalize to novel, unobserved situations.