
Don't Worry About the Vase Podcast On Dwarkesh Patel's Second Interview With Ilya Sutskever
Dec 3, 2025
In this conversation, Ilya Sutskever, co-founder of Safe Superintelligence Inc. and former chief scientist of OpenAI, discusses AI and deep learning. He explains why models can score well on benchmarks yet struggle in real-world applications, and frames emotions as key value signals. Sutskever stresses the importance of continual learning after deployment and the challenges of aligning AI with human values, and he speculates on timelines for superhuman learners, sketching both the potential and the uncertainty of an AI-driven future.
AI Snips
Ship Early And Iterate From Deployment
- Deploy incrementally so systems can learn from real-world use and surface emergent capabilities.
- Ship early and iterate: real deployment reveals behaviors that never appear in the lab.
Rapid Growth Hinges On Governance
- Ilya expects rapid economic growth from advanced learners, but the specifics depend heavily on governance and rules.
- He envisions humans remaining in charge if alignment succeeds, though that outcome is far from guaranteed.
Capping Superintelligence Is Unsolved
- Ilya wants the most powerful superintelligence to be capped but admits he does not know how to implement such caps.
- Capping powerful systems is conceptually appealing but practically unresolved.

