
David Duvenaud
Professor of computer science at the University of Toronto and former lead of the alignment evals team at Anthropic. Co-author of the paper 'Gradual Disempowerment', which explores how even aligned AI could undermine human control and democratic institutions.
Top 3 podcasts with David Duvenaud
Ranked by the Snipd community

153 snips
Jan 27, 2026 • 2h 32min
#234 – David Duvenaud on why 'aligned AI' would still kill democracy
David Duvenaud, a University of Toronto CS professor and former lead of Anthropic's alignment evals team, discusses the 'gradual disempowerment' thesis and how AI could make people economically and politically irrelevant. The conversation covers cultural shifts as machines shape norms, who controls powerful AIs, and whether liberal democracy can survive once humans are no longer 'needed'.

81 snips
Dec 23, 2025 • 1h 19min
How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
David Duvenaud, an associate professor at the University of Toronto, dives into the concept of gradual disempowerment in a post-AGI world. He discusses how slow institutional shifts could erode human power while appearing normal. The conversation covers cultural drift toward AI, the risks of obsolete labor, and the erosion of property rights. Duvenaud also highlights the difficulty of aligning AI with human values and the potential for misaligned governance once humans become unnecessary, closing with a thought-provoking look at the future of human-AI relationships.

5 snips
Oct 6, 2025 • 1h 2min
David Duvenaud on the Cruxes and Possibilities of Post-AGI Futures
David Duvenaud, an Associate Professor at the University of Toronto and former Anthropic researcher, delves into the complexities of post-AGI futures. He discusses the implications of his paper 'Gradual Disempowerment', arguing that liberalism may falter if humans become obsolete. He weighs the potential pitfalls of UBI and gamification, alongside the need for resilient institutions that align with human values. The conversation also touches on asymmetrical human-AI relationships, the challenges of forecasting, and futarchy as a governance model, sparking hope amid uncertainty.
