

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Sep 5, 2019 • 38min
Not Cool Ep 3: Tim Lenton on climate tipping points
What is a climate tipping point, and how do we know when we’re getting close to one? On Episode 3 of Not Cool, Ariel talks to Dr. Tim Lenton, Professor and Chair in Earth System Science and Climate Change at the University of Exeter and Director of the Global Systems Institute. Tim explains the shifting system dynamics that underlie phenomena like glacial retreat and the disruption of monsoons, as well as their consequences. He also discusses how to deal with low certainty/high stakes risks, what types of policies we most need to be implementing, and how humanity’s unique self-awareness impacts our relationship with the Earth.
Topics discussed include:
Climate tipping points: impacts, warning signals
Evidence that the climate is nearing a tipping point
IPCC warming targets
Risk management under uncertainty
Climate policies
Human tipping points: social, economic, technological
The Gaia Hypothesis

Sep 3, 2019 • 28min
Not Cool Ep 2: Joanna Haigh on climate modeling and the history of climate change
On the second episode of Not Cool, Ariel delves into some of the basic science behind climate change and the history of its study. She is joined by Dr. Joanna Haigh, an atmospheric physicist whose work has been foundational to our current understanding of how the climate works. Joanna is a fellow of The Royal Society and recently retired as Co-Director of the Grantham Institute on Climate Change and the Environment at Imperial College London. Here, she gives a historical overview of the field of climate science and the major breakthroughs that moved it forward. She also discusses her own work on the stratosphere, radiative forcing, solar variability, and more.
Topics discussed include:
History of the study of climate change
Overview of climate modeling
Radiative forcing
What’s changed in climate science in the past few decades
How to distinguish between natural climate variation and human-induced global warming
Solar variability, sun spots, and the effect of the sun on the climate
History of climate denial

Sep 3, 2019 • 36min
Not Cool Ep 1: John Cook on misinformation and overcoming climate silence
On the premiere of Not Cool, Ariel is joined by John Cook: psychologist, climate change communication researcher, and founder of SkepticalScience.com. Much of John’s work focuses on misinformation related to climate change, how it’s propagated, and how to counter it. He offers a historical analysis of climate denial and the motivations behind it, and he debunks some of its most persistent myths. John also discusses his own research on perceived social consensus, the phenomenon he’s termed “climate silence,” and more.
Topics discussed include:
History of the study of climate change
Climate denial: history and motivations
Persistent climate myths
How to overcome misinformation
How to talk to climate deniers
Perceived social consensus and climate silence

Sep 3, 2019 • 4min
Not Cool Prologue: A Climate Conversation
In this short trailer, Ariel Conn talks about FLI's newest podcast series, Not Cool: A Climate Conversation.
Climate change, to state the obvious, is a huge and complicated problem. But unlike with the threats posed by artificial intelligence, biotechnology, or nuclear weapons, you don’t need an advanced science degree or a high-ranking government position to start having a meaningful impact: each of us can begin making lifestyle changes today that will reduce our own carbon footprints. We started this podcast because the news about climate change seems to get worse with each new article and report, but the solutions, at least as reported, remain vague and elusive. We wanted to hear from the scientists and experts themselves to learn what’s really going on and how we can all come together to solve this crisis.

Aug 30, 2019 • 49min
FLI Podcast: Beyond the Arms Race Narrative: AI and China with Helen Toner and Elsa Kania
Discussions of Chinese artificial intelligence often center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond this narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward.
Topics discussed in this episode include:
The rise of AI in China
The escalation of tensions between the U.S. and China in the AI realm
Chinese AI Development plans and policy initiatives
The AI arms race narrative and the problems with it
Civil-military fusion in China vs. U.S.
The regulation of Chinese-American technological collaboration
AI and authoritarianism
Openness in AI research and when it is (and isn’t) appropriate
The relationship between privacy and advancement in AI

Aug 16, 2019 • 1h 12min
AIAP: China's AI Superpower Dream with Jeffrey Ding
"In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China." (FLI's AI Policy - China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China's AI development and strategy, as well as China's approach to strategic technologies more generally.
Topics discussed in this episode include:
-China's historical relationships with technology development
-China's AI goals and some recently released principles
-Jeffrey Ding's work, Deciphering China's AI Dream
-The central drivers of AI and the resulting Chinese AI strategy
-Chinese AI capabilities
-AGI and superintelligence awareness and thinking in China
-Dispelling AI myths, promoting appropriate memes
-What healthy competition between the US and China might look like
Here you can find the page for this podcast: https://futureoflife.org/2019/08/16/chinas-ai-superpower-dream-with-jeffrey-ding/
Important timestamps:
0:00 Intro
2:14 Motivations for the conversation
5:44 Historical background on China and AI
8:13 AI principles in China and the US
16:20 Jeffrey Ding’s work, Deciphering China’s AI Dream
21:55 Does China’s government play a central hand in setting regulations?
23:25 Can Chinese implementation of regulations and standards move faster than in the US? Is China buying shares in companies to have decision making power?
27:05 The components and drivers of AI in China and how they affect Chinese AI strategy
35:30 Chinese government guidance funds for AI development
37:30 Analyzing China’s AI capabilities
44:20 Implications for the future of AI and AI strategy given the current state of the world
49:30 How important are AGI and superintelligence concerns in China?
52:30 Are there explicit technical AI research programs in China for AGI?
53:40 Dispelling AI myths and promoting appropriate memes
56:10 Relative and absolute gains in international politics
59:11 On Peter Thiel’s recent comments on superintelligence, AI, and China
1:04:10 Major updates and changes since Jeffrey wrote Deciphering China’s AI Dream
1:05:50 What does healthy competition between China and the US look like?
1:11:05 Where to follow Jeffrey and read more of his work
You can take a short (4 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
Deciphering China's AI Dream: https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
FLI AI Policy - China page: https://futureoflife.org/ai-policy-china/
ChinAI Newsletter: https://chinai.substack.com
Jeff's Twitter: https://twitter.com/jjding99
Previous podcast with Jeffrey: https://youtu.be/tm2kmSQNUAU

Aug 1, 2019 • 1h 10min
FLI Podcast: The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield
Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Center for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems. They also discussed our species’ unique strengths and vulnerabilities –– and the ways in which technology has heightened both –– with respect to the changing climate.

Jun 28, 2019 • 38min
FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell
Nuclear weapons testing is mostly a thing of the past: The last nuclear weapon test explosion on US soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate?
In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT).
This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.
Topics discussed in this episode:
- The validity of the U.S. allegations: Is Russia really testing weapons?
- The International Monitoring System: How effective is it if the treaty isn’t in effect?
- The modernization of U.S./Russian/Chinese nuclear arsenals and what that means
- Why there’s a push for nuclear testing
- Why opposing nuclear testing can help ensure the US maintains nuclear superiority

May 31, 2019 • 39min
FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi
In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let's build our technology around our visions for the future.

May 23, 2019 • 1h 27min
AIAP: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson
Andrés Gómez Emilsson, consciousness researcher and QRI co-founder with a computational psychology background, and Mike Johnson, QRI executive director specializing in neuroscience and philosophy of mind, explore whether consciousness is formalizable. They discuss qualia realism, the Symmetry Theory of Valence, resonant brain harmonics, limits of functionalism, and implications for AI alignment and ethical value.


