

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Mar 20, 2026 • 1h 12min
What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)
Emilia Javorsky, a physician-scientist who directs futures work at the Future of Life Institute, critiques bold tech claims that AI will simply cure cancer. She explains why biology’s complexity, poor and siloed data, and misaligned incentives matter more than raw intelligence. The conversation also explores realistic AI roles in drug discovery, trials, measurement, and cutting medical bureaucracy.

Mar 16, 2026 • 2h 43min
AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)
A clear-eyed look at where AI truly speeds cancer research and where hype falls short. Topics include targeted AI wins like AlphaFold, the data and lab bottlenecks that block clinical progress, and why biological complexity resists software-style thinking. The conversation covers systemic incentives, regulatory mismatches, and a practical roadmap for building data, funding, and policy infrastructure to make AI helpful rather than magical.

Mar 5, 2026 • 1h 45min
How AI Hacks Your Brain's Attachment System (with Zak Stein)
Zak Stein, an educational psychologist researching child development and AI harms, explains how anthropomorphic AI can hijack attention and attachment systems. He discusses AI companions for kids, loneliness, cognitive atrophy, and why design choices create powerful social bonds. A short, urgent conversation about protecting relationships, redesigning education, and building cognitive security tools.

Feb 20, 2026 • 1h 7min
The Case for a Global Ban on Superintelligence (with Andrea Miotti)
Andrea Miotti, founder and CEO of Control AI, an organization fighting extreme AI risks, argues for a global ban on systems that could outsmart humans. He discusses industry lobbying tactics, why capability-focused regulation matters, and strategies for informing lawmakers and mobilizing the public. Short-term steps and international coordination are highlighted as paths to keep powerful AI under human control.

Feb 6, 2026 • 1h 47min
Can AI Do Our Alignment Homework? (with Ryan Kidd)
Ryan Kidd, co-executive director at MATS, builds AI safety talent pipelines and mentors researchers on interpretability and governance. He discusses AGI timelines and preparing for nearer-term risks. The conversation covers model deception, evaluation and monitoring, tradeoffs between safety work and capabilities, and what MATS looks for in applicants and researchers.

Jan 27, 2026 • 1h 5min
How to Rebuild the Social Contract After AGI (with Deric Cheng)
Deric Cheng, Director of Research at the Windfall Trust and lead of the AGI Social Contract consortium, explores how frontier AI could concentrate corporate power and reshape labor. He outlines resilient job types, taxation and welfare options, land and consumption taxes, and a phased policy roadmap to decouple economic security from work. The conversation surveys global coordination and practical reforms without diving into technical solutions.

Jan 20, 2026 • 1h 18min
How AI Can Help Humanity Reason Better (with Oly Sourbut)
Oly Sourbut, a researcher at the Future of Life Foundation, discusses innovative ways AI can enhance human reasoning and decision-making. He delves into community-driven fact-checking and the importance of keeping humans central in AI systems. The conversation covers tools for scenario planning and risk assessment while emphasizing the need for epistemic virtues in AI models. Oly also raises concerns about skill atrophy from over-reliance on AI and imagines a future where AI empowers more deliberate, aligned decision-making.

Jan 7, 2026 • 1h 20min
How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
Nora Ammann, a technical specialist at the UK’s ARIA focusing on AI safety, discusses crucial strategies for mitigating AI risks. She highlights the dangers of rogue AI dominance and chaotic competition, emphasizing the need for early interventions. Nora proposes human-AI coalitions to foster cooperative development and scalable oversight. She also explores how formal guarantees can enhance AI resilience and safety, and examines the complexities of agent collaboration and the role of AI in improving cybersecurity.

Dec 23, 2025 • 1h 19min
How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
David Duvenaud, an associate professor at the University of Toronto, dives into the concept of gradual disempowerment in a post-AGI world. He discusses how slow institutional shifts could erode human power while appearing normal. The conversation covers cultural shifts towards AI, the risks of obsolete labor, and the erosion of property rights. Duvenaud also highlights the complexities of aligning AI with human values and the potential for misaligned governance if humans become unnecessary, closing with an engaging, thought-provoking look at the future of human-AI relationships.

Dec 12, 2025 • 1h 29min
Why the AI Race Undermines Safety (with Steven Adler)
Steven Adler, former safety researcher at OpenAI, dives into the intricate challenges of AI governance. He sheds light on the competitive pressures that push labs to release potentially dangerous models too quickly. Exploring the mental health impacts of chatbots, Adler raises critical questions about responsibility for AI-harmed users. He discusses the urgent need for international regulations like the EU AI Act and emphasizes the risks of deploying AIs without thorough safety evaluations, sparking a lively debate on the future of superintelligent systems.


