

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Apr 27, 2017 • 47min
Climate Change With Brian Toon And Kevin Trenberth
I recently visited the National Center for Atmospheric Research in Boulder, CO and met with climate scientists Dr. Kevin Trenberth and CU Boulder's Dr. Brian Toon to have a different climate discussion: not about whether climate change is real, but about what it is, what its effects could be, and how we can prepare for the future.

Mar 31, 2017 • 58min
Law and Ethics of AI with Ryan Jenkins and Matt Scherer
The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, where he studies the ethics of technology.
In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.

Feb 28, 2017 • 41min
UN Nuclear Weapons Ban With Beatrice Fihn And Susi Snyder
Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. Previous nuclear treaties have included the Test Ban Treaty and the Non-Proliferation Treaty, but in the 70-plus years of the United Nations, member countries have yet to agree on a treaty to completely ban nuclear weapons. The negotiations will begin this March. To discuss the importance of this event, I interviewed Beatrice Fihn and Susi Snyder. Beatrice is the Executive Director of the International Campaign to Abolish Nuclear Weapons, also known as ICAN, where she is leading a global campaign of about 450 NGOs working together to prohibit nuclear weapons. Susi is the Nuclear Disarmament Program Manager for PAX in the Netherlands and the principal author of the Don't Bank on the Bomb series. She is an International Steering Group member of ICAN.
(Edited by Tucker Davey.)

Jan 31, 2017 • 54min
AI Breakthroughs With Ian Goodfellow And Richard Mallah
2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the Director of AI Projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks.

Dec 30, 2016 • 32min
FLI 2016 - A Year In Review
FLI's founders and core team -- Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Victoria Krakovna, Richard Mallah, Lucas Perry, David Stanley, and Ariel Conn -- discuss the developments of 2016 they were most excited about, as well as why they're looking forward to 2017.

Nov 30, 2016 • 34min
Heather Roff and Peter Asaro on Autonomous Weapons
Drs. Heather Roff and Peter Asaro, two experts in autonomous weapons, talk about their work to understand and define the role of autonomous weapons, the problems with autonomous weapons, and why the ethical issues surrounding autonomous weapons are so much more complicated than other AI systems.

Oct 31, 2016 • 47min
Nuclear Winter With Alan Robock and Brian Toon
I recently sat down with meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.

Sep 28, 2016 • 25min
Robin Hanson On The Age Of Em
Dr. Robin Hanson talks about the Age of Em, the future and evolution of humanity, and his research for his next book.

Sep 20, 2016 • 16min
Nuclear Risk In The 21st Century
In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe.

Aug 30, 2016 • 43min
Concrete Problems In AI Safety With Dario Amodei And Seth Baum
An interview with Dario Amodei of OpenAI and Seth Baum of the Global Catastrophic Risk Institute about studying short-term vs. long-term risks of AI, plus extensive discussion of Amodei's recent paper, Concrete Problems in AI Safety.


