

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

May 11, 2026 • 1h 36min
Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)
Anthony Aguirre, CEO of the Future of Life Institute and AI safety researcher, outlines his 'A Better Path for AI' vision. He explores four AI races: attention, attachment, automation, and superintelligence. He argues for purpose-built, controllable tool AIs, modular designs, scope-limiting safety, access limits, external guardrails, and international cooperation.

May 7, 2026 • 1h 7min
How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
Charlie Bullock, a Senior Research Fellow at the Institute for Law and AI focused on U.S. AI policy, outlines radical optionality: preparing governments for transformative AI without locking in premature rules. He discusses the pacing problem between law and tech. Short takes cover transparency and reporting, mandatory evaluations and cybersecurity standards, and building technical hiring and institutional capacity.

Apr 29, 2026 • 1h 24min
Why AI Is Not a Normal Technology (with Peter Wildeford)
Peter Wildeford, Head of Policy at the AI Policy Network and leading AI forecaster, explains why AI is neither a bubble nor a normal technology. He discusses forecasting AI progress, economic and employment timing, adoption gaps and power users, rising cyber and military risks, export controls, and the evolving role of prediction markets.

Apr 17, 2026 • 54min
Why AI Evaluation Science Can't Keep Up (with Carina Prunkl)
Carina Prunkl, a researcher on AI ethics and governance at Inria and Oxford, discusses assessing capabilities and risks of general-purpose AI. She explores why systems ace hard formal tasks yet stumble on simple ones. The conversation covers jagged capability profiles, gaps between tests and real-world behavior, rising misuse risks as capabilities grow, de-skilling, and layered safeguards.

Apr 2, 2026 • 56min
Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)
Li-Lian Ang, a Blue Dot Impact team member building a workforce to reduce AI risks, discusses defense-in-depth strategies organized in three layers. Topics include AI-enabled bio threats, automated cyberattacks by agents, economic disempowerment and power concentration, and how society can detect, prevent, and withstand harms.

Mar 20, 2026 • 1h 12min
What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)
Emilia Javorsky, a physician-scientist who directs futures work at the Future of Life Institute, critiques bold tech claims that AI will simply cure cancer. She explains why biology’s complexity, poor and siloed data, and misaligned incentives matter more than raw intelligence. The conversation also explores realistic AI roles in drug discovery, trials, measurement, and cutting medical bureaucracy.

Mar 16, 2026 • 2h 43min
AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)
A clear-eyed look at where AI truly speeds cancer research and where hype falls short. Topics include targeted AI wins like AlphaFold, the data and lab bottlenecks that block clinical progress, and why biological complexity resists software-style thinking. The conversation covers systemic incentives, regulatory mismatches, and a practical roadmap for building data, funding, and policy infrastructure to make AI helpful rather than magical.

Mar 5, 2026 • 1h 45min
How AI Hacks Your Brain's Attachment System (with Zak Stein)
Zak Stein, an educational psychologist researching child development and AI harms, explains how anthropomorphic AI can hijack attention and attachment systems. He discusses AI companions for kids, loneliness, cognitive atrophy, and why design choices create powerful social bonds. The conversation is a short, urgent case for protecting relationships, redesigning education, and building cognitive security tools.

Feb 20, 2026 • 1h 7min
The Case for a Global Ban on Superintelligence (with Andrea Miotti)
Andrea Miotti, founder and CEO of Control AI, an organization fighting extreme AI risks, argues for a global ban on systems that could outsmart humans. He discusses industry lobbying tactics, why capability-focused regulation matters, and strategies to inform lawmakers and mobilize the public. Short-term steps and international coordination are highlighted as paths to keep powerful AI under human control.

Feb 6, 2026 • 1h 47min
Can AI Do Our Alignment Homework? (with Ryan Kidd)
Ryan Kidd, co-executive director at MATS, builds AI safety talent pipelines and mentors researchers on interpretability and governance. He discusses AGI timelines and preparing for nearer-term risks. The conversation covers model deception, evaluation and monitoring, tradeoffs between safety work and capabilities, and what MATS looks for in applicants and researchers.


