Future of Life Institute Podcast

Future of Life Institute
Aug 31, 2018 • 44min

AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction? On this month's podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to govern effectively. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, where his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute and a Research Fellow with the UC Berkeley Center for Long-Term Cybersecurity, where she researches the security and strategy implications of AI and digital governance.

Topics discussed in this episode include:
- Three lenses through which to view AI's transformative power
- Emerging international and national AI governance strategies
- The risks and benefits of regulating artificial intelligence
- The importance of public trust in AI systems
- The dangers of an AI race
- How AI will change the nature of wealth and power
Aug 16, 2018 • 1h 46min

The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce

What role does metaethics play in AI alignment and safety? How might paths to AI alignment change given different metaethical views? How do issues in moral epistemology, motivation, and justification affect value alignment? What might be the metaphysical status of suffering and pleasure? What's the difference between moral realism and anti-realism, and how is each view grounded? And just what does any of this really have to do with AI? The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look at a preliminary landscape which begins to map this space. In this podcast, Lucas spoke with David Pearce and Brian Tomasik. David is a co-founder of the World Transhumanist Association, since rebranded as Humanity+. You might know him for his work on The Hedonistic Imperative, a book focusing on our moral obligation to work towards the abolition of suffering in all sentient life. Brian is a researcher at the Foundational Research Institute. He writes about ethics, animal welfare, and future scenarios on his website "Essays on Reducing Suffering."

Topics discussed in this episode include:
- What metaethics is, and whether it ties into AI alignment
- Brian and David's ethics and metaethics
- Moral realism vs. anti-realism
- Emotivism
- Moral epistemology and motivation
- Different paths to, and effects on, AI alignment given different metaethics
- The moral status of hedonic tones vs. preferences
- Whether we can make moral progress, and what that would mean
- Moving forward given moral uncertainty
Jul 31, 2018 • 2h

Six Experts Explain the Killer Robots Debate

Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it's complicated. In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of lethal autonomous weapons systems (LAWS), where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre, artificial intelligence professor Toby Walsh, Article 36 founder Richard Moyes, Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty, and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro. If you don't have time to listen to the podcast in full, or if you want to skip around through the interviews, each interview starts at the timestamp below:

Paul Scharre: 3:40
Toby Walsh: 40:50
Richard Moyes: 53:30
Mary Wareham & Bonnie Docherty: 1:03:35
Peter Asaro: 1:32:40
Jul 16, 2018 • 1h 23min

AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy

What role does cyber security play in AI alignment and safety? What is AI completeness? What is the space of mind design, and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leak-proof the singularity to ensure we are able to test AGI? And what is computational complexity theory, anyway? AI Safety, Possible Minds, and Simulated Worlds is the third podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application. In this podcast, Lucas spoke with Roman Yampolskiy, a tenured associate professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Dr. Yampolskiy's main areas of interest are AI safety, artificial intelligence, behavioral biometrics, cybersecurity, digital forensics, games, genetic algorithms, and pattern recognition. He is the author of over 100 publications, including multiple journal articles and books.

Topics discussed in this episode include:
- Cyber security applications to AI safety
- Key concepts in Roman's papers and books
- Is AI alignment solvable?
- The control problem
- The ethics of, and detection of, qualia in machine intelligence
- Machine ethics and its role, or lack thereof, in AI safety
- Simulated worlds, and whether detecting base reality is possible
- AI safety publicity strategy
Jun 29, 2018 • 53min

Mission AI - Giving a Global Voice to the AI Discussion with Charlie Oliver and Randi Williams

How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but also to AI's development? On this month's podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include everyone's voice. Charlie is the founder and CEO of the digital media strategy company Served Fresh Media, and she's also the founder of Tech 2025, a platform and community for people to learn about emerging technologies and discuss their implications for society. Randi is a doctoral student in the Personal Robots Group at the MIT Media Lab; she wants to understand children's interactions with AI and to develop educational platforms that empower non-experts to build their own AI systems.
Jun 14, 2018 • 1h 15min

AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala

In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks which are both terminal in severity and transgenerational in scope. If we were to keep a risk's scope transgenerational but increase its severity past terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity? In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He has previously worked for the Machine Intelligence Research Institute, and has publications on AI safety, AI timeline forecasting, and consciousness research.

Topics discussed in this episode include:
- The definition and taxonomy of suffering risks
- How superintelligence has special leverage for generating or mitigating suffering risks
- How different moral systems view suffering risks
- What is possible for minds in general, and how this plays into suffering risks
- The probability of suffering risks
- What we can do to mitigate suffering risks
May 31, 2018 • 42min

Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler

With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? To learn more about the geopolitical issues surrounding North Korea's and Iran's nuclear situations, as well as how nuclear programs in these countries are monitored, Ariel spoke with Melissa Hanham and Dave Schmerler on this month's podcast. Melissa and Dave are both nuclear weapons experts with the Center for Nonproliferation Studies at the Middlebury Institute of International Studies, where they research weapons of mass destruction with a focus on North Korea.

Topics discussed in this episode include:
- The progression of North Korea's quest for nuclear weapons
- What happened, and what's next, regarding the Iran deal
- How to use open-source data to monitor nuclear weapons testing
- How younger generations can tackle nuclear risk

In light of the on-again/off-again situation regarding the North Korea summit, Melissa sent us a quote after the podcast was recorded: "Regardless of whether the summit in Singapore takes place, we all need to set expectations appropriately for disarmament. North Korea is not agreeing to give up nuclear weapons anytime soon. They are interested in a phased approach that will take more than a decade, multiple parties, new legal instruments, and new technical verification tools."
Apr 30, 2018 • 58min

What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville

What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices, like former US Secretary of Defense William Perry, argue that the threat of nuclear conflict is growing. On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability of Nuclear War. The report examines 60 historical incidents that could have escalated to nuclear war and presents a model for determining the odds that we could see some type of nuclear war in the future.
Apr 25, 2018 • 1h 25min

AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell

Inverse Reinforcement Learning and Inferring Human Preferences is the first podcast in the new AI Alignment series, hosted by Lucas Perry. This series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across a variety of areas, such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following or subscribing to us on YouTube, SoundCloud, or your preferred podcast site/application. In this podcast, Lucas spoke with Dylan Hadfield-Menell, a fifth-year Ph.D. student at UC Berkeley. Dylan's research focuses on the value alignment problem in artificial intelligence. He is ultimately concerned with designing algorithms that can learn about and pursue the intended goals of their users, designers, and society in general. His recent work primarily focuses on algorithms for human-robot interaction with unknown preferences and on reliability engineering for learning systems.

Topics discussed in this episode include:
- Inverse reinforcement learning
- Goodhart's Law and its relation to value alignment
- Corrigibility and obedience in AI systems
- IRL and the evolution of human values
- Ethics and moral psychology in AI alignment
- Human preference aggregation
- The future of IRL
Mar 30, 2018 • 58min

Navigating AI Safety -- From Malicious Use to Accidents

Is the malicious use of artificial intelligence inevitable? If the history of technological progress has taught us anything, it's that every "beneficial" technological breakthrough can be used to cause harm. How can we keep bad actors from using otherwise beneficial AI technology to hurt others? How can we ensure that AI technology is designed thoughtfully to prevent accidental harm or misuse? On this month's podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Centre for the Study of Existential Risk (CSER). They talk about CSER's recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts to ensure safe and beneficial AI.
