

The Existential Hope Podcast
Foresight Institute
The Existential Hope Podcast features in-depth conversations with people working on positive, high-tech futures. We explore how the future could be much better than today—if we steer it wisely.

Hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite the scientists, founders, and philosophers shaping tomorrow’s breakthroughs—AI, nanotech, longevity biotech, neurotech, space, smarter governance, and more.

About Foresight Institute: For 40 years the independent nonprofit Foresight Institute has mapped how emerging technologies can serve humanity. Its Existential Hope program is the North Star: mapping the futures worth aiming for and the breakthroughs needed to reach them. This podcast is that exploration in public. Follow along and help tip the century toward success.

Explore more: Transcript, listed resources, and more: https://www.existentialhope.com/podcasts
Follow on X. Hosted on Acast. See acast.com/privacy for more information.
Episodes

May 13, 2026 • 51min
The AI future where humans get paid to be creative
Most AI futures give us two options: mass unemployment, or a government handout to soften the blow. But what if there's a third option, one centered on completely new categories of creative work that don't yet exist, where people get paid for contributing to AI rather than replaced by it?

In this episode, we talk with Jaron Lanier, pioneer of virtual reality and scientist at Microsoft Research. He proposes a radically different way of thinking about AI, and unpacks its consequences from AI safety to the future of the economy.

We touch on:
- The case for thinking of AI not as an alien intelligence, but rather as a collaboration of human data
- How this reframe helps you understand the failures of current AI systems, and why so many of the industry's most powerful figures seem to be losing their grip on reality
- A practical approach to AI safety inspired by multi-factor authentication in cybersecurity
- Why universal basic income is unstable, and why a creativity economy (where people earn from their contributions to AI) could be a better way of distributing the benefits of AI
- How to be an optimist about technological progress while acknowledging the risks and being critical of certain developments
- Why history gives us the most rational grounds for optimism about our future with AI

Apr 28, 2026 • 50min
Teaching AI empathy using brain signals
AIs could get much better at understanding what we truly value if we gave them access to our brain signals. And doing that is becoming easier than ever before.

In this episode, we talk with Thorsten Zander, professor at Brandenburg University of Technology and co-founder of Zander Labs. He coined the concept of passive brain-computer interfaces: devices that read brain signals to decode a user's mental state, non-invasively and without any effort on their part.

We cover:
- What non-invasive brain-computer interfaces (BCIs) can actually pick up from brain signals, and why that's very different from reading your thoughts or internal monologue
- The hardware and software breakthroughs that are finally making passive BCIs wearable and affordable
- How continuous neural feedback could dramatically improve AI training compared to current methods based on human ratings
- Why Thorsten believes passive BCIs may offer the most concrete path to solving the AI alignment problem
- The risk of social networks exploiting unconscious brain reactions to manipulate people, and why regulation alone is unlikely to be enough

Timestamps:
0:00 Cold open
0:56 What are passive brain-computer interfaces, and how are they different from Neuralink?
3:23 What are the applications of passive brain-computer interfaces?
4:33 What people get wrong about BCIs: reading thoughts vs. mental states
6:14 How passive BCIs could transform AI training and help AI understand you better
11:40 The misuse risk: how social networks could exploit unconscious brain reactions to manipulate political opinions
16:00 How close is mass adoption? The hardware and software breakthroughs making BCIs wearable
20:08 Why Germany's cybersecurity agency invested €30M in passive BCI research
24:22 Invasive vs non-invasive: how Europe and the US are taking different approaches to brain-computer interfaces
28:52 Should AI act on your first instinct?
32:56 How passive BCIs could solve the AI alignment problem (and why previous approaches have fallen short)
35:26 From professor to startup founder: what Thorsten learned making the leap
41:27 Best case scenario: what the world looks like when AI truly understands human values
46:03 How to get started in neuroadaptive AI and passive BCIs
48:18 The best advice Thorsten ever received

Apr 15, 2026 • 58min
How to build a career that actually changes the world
More and more people want to make a real-world difference with their career. Very few of them do. Why are careers in consultancy or finance still so much more mainstream than careers tackling the world's biggest problems?

In this episode, we talk with Jan-Willem van Putten, co-founder of the School for Moral Ambition, an organization that is building clear pathways for people who want to do work that actually changes the world.

We discuss:
- The three main bottlenecks stopping talented people from doing high-impact work
- How to find important yet neglected causes to work on, and the School for Moral Ambition's top picks
- Why movements that want to change the world often fail, and what effective advocates do differently
- How to figure out which problems your specific background and skills are best placed to solve
- The real struggles of leaving a prestigious career behind, from lifestyle creep to peer support, and what makes people say it was worth it

Timestamps:
0:00 Cold open
2:12 From thesis on talent waste to joining consultancy: Jan-Willem's journey
4:29 Why did you step away from management consulting?
6:35 Focusing on impact vs. status: can you persuade people?
8:40 What is the School for Moral Ambition?
11:58 Is there now a real field for impact-driven careers?
12:58 Cause areas: food transition and tobacco control
17:10 How to prioritize problems to work on: the Triple-S framework
21:11 Next cause areas: tax fairness and democracy
23:00 What does the fellowship journey look like?
25:06 The profile of an ambitious idealist: startup drive meets activist values
27:43 Noble losers: why social movements fail
30:56 Is moral ambition only for the privileged?
36:04 How to cultivate a higher level of ambition in society
40:31 Feeling hopeless about big problems? New tools change the game
42:19 What holds people back from making the leap to meaningful work
46:12 What do fellows find most rewarding?
47:32 What does success look like in 10 years?
51:25 Where to start if you want to shift to a career that makes a difference
55:28 Best advice ever received: the case for taking action

Apr 2, 2026 • 53min
How AI could improve the lives of trillions of animals
Constance Li is the founder of Sentient Futures, an organization that steers AI and emerging tech toward improving nonhuman lives. She discusses the vast scale of farmed-animal suffering, precision livestock farming using sensors and computer vision, research into AI-assisted interspecies communication, genetic welfare possibilities, and the biggest hurdles of funding and attention.

Mar 19, 2026 • 51min
How dating an AI could improve your real love life | David Eagleman
Having an AI boyfriend or girlfriend might seem creepy, but what if it helped you get better at human relationships?

In this episode, we talk with David Eagleman, a professor of neuroscience at Stanford, bestselling author, and science communicator. We discuss how AI and other technologies can help us become better humans – wiser, kinder and more empathetic, not just more productive. We get a neuroscientist’s take on how human and artificial intelligence interact, including:
- How to use AI to better understand other people and improve our relationships
- Using debate AIs in schools to make younger generations better at critical thinking and grasping both sides of an argument
- Is AI making our lives too easy by removing the friction we need to learn?
- Technologies that could expand what’s possible with our brain, from mind uploading to brain-to-brain communication

Timestamps:
0:00 Cold open
1:38 How David Eagleman became a neuroscientist
4:46 How malleable is the brain?
6:29 Can AI make us better humans? The Reddit debate bot experiment
11:00 AI relationships and becoming better at dating real people
14:24 Using AI to hear his late father's voice again
18:26 Mind uploading and digital immortality
23:27 What technology could make us more kind and empathetic
24:04 How AI could revolutionize debate education and critical thinking
28:30 Why AI needs a "tough love" mode to help us grow
30:17 Does AI making life easier rob us of useful friction for learning?
34:21 Why brain-to-brain communication probably won't help us understand each other
37:29 Could neurotechnology let us experience the world as another species?
41:58 The current state of neuroscience and where it's heading
48:05 How to get started if you're inspired by this conversation

Feb 27, 2026 • 1h 9min
How the whole world can exceed Swiss living standards by 2100 (backed by data)
What would the world look like if the poorest country were as rich as Switzerland is today? It turns out we could actually see it happen by 2100, with economic growth similar to what we have experienced over the past 20 years.

In this episode, we talk with Marc Canal, Senior Fellow at the McKinsey Global Institute and co-author of the book A Century of Plenty. We unpack what a hundred years of data tells us about human progress, and map out the steps to an ambitious scenario we can build by the end of the century.

We discuss:
- How much the world has actually changed since 1925: from one in five children dying before age five in Spain, to life expectancy growing by 40 years globally
- What it would take to make today’s Swiss living standards the world’s floor by 2100 (while richer countries grow far beyond it), from energy efficiency to birth rates and geopolitics
- How data shows economic growth is actually good for the climate and for human happiness
- Why achieving a prosperous world currently depends more on our collective belief that progress is possible than on resource constraints
- How you can thrive in an AI world, where 57% of work hours can be automated, by leaning into the “messy” jobs

Timestamps:
0:00 - Cold open
1:54 - Why the McKinsey Global Institute wrote “A Century of Plenty”
5:20 - What was the world like in 1925?
10:04 - The most surprising stats from 100 years of progress
16:03 - Defining the “empowerment line” vs. the poverty line
19:30 - Projecting 2100: can we make Switzerland the global "floor"?
22:26 - The 5 conditions for achieving a world of plenty
26:14 - Can we grow the economy without sacrificing the environment?
28:23 - Economic growth vs. climate change: mitigation and adaptation
34:05 - What are the biggest challenges to the “progress machine”?
36:30 - The demographic crisis, and solving falling fertility rates
45:20 - Will AI speed up human innovation?
48:21 - Geopolitics: is the world really de-globalizing?
52:30 - The crisis of hope: why are we so pessimistic?
56:26 - How different nations reach the frontier of progress
58:49 - Building a new culture of growth
1:01:09 - Does economic progress actually make us happier?
1:05:39 - How you can help make a century of plenty probable

Feb 19, 2026 • 1h 26min
How your personal moral compass helps you build a better world | SJ Beard
To make the future go well, we might not need a perfect model for its end state, or an abstract philosophical theory to guide us. Can your own sense of “the right thing to do” actually help make the world better?

In this episode we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk, and author of the book “Existential Hope”.

Some of the topics we discuss:
- How to shift our focus from "preventing the end of the world" to actively building a future worth living
- Why aiming for a “happy ever after” state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire
- Relying on our own sense of “the right thing to do” as a practical guide to make the world better
- Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top
- Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow

Timestamps:
[01:31] SJ’s background in philosophy and existential risk
[02:02] Why write a book on existential hope?
[04:43] Defining existential hope, and its relationship with existential risks and existential anxiety
[11:09] Human agency without the guilt
[13:59] Why there are no truly "natural" disasters
[16:49] Why we shouldn’t try to build a perfect utopia
[19:05] Protopia: is iterative improvement enough?
[22:19] Defining progress: what does it mean to "get better"?
[26:13] Protopia vs. viatopia: setting goals and achieving a great future
[29:48] Existential safety as a collective project
[35:06] Using participatory tools to make global decisions
[36:32] Making existential hope reasonably demanding
[40:06] Can we achieve systemic change in a tech-focused world?
[46:00] Concrete socio-technical projects for AI safety
[49:02] Aligning AI by building its character
[51:45] The importance of history in building a good future
[54:24] Key 17th-century ideas that are shaping modern society
[58:20] Cultivating "humanity as a virtue"
[01:04:37] Lessons from nuclear near-misses: the example of Petrov
[01:09:20] The trade-offs of a humanistic, bottom-up approach to decision-making
[01:12:16] Literacy vs. orality: how ideas become simplified
[01:16:45] Meme culture and the transmission of deep context
[01:18:48] How writing the book changed SJ’s mind
[01:21:38] SJ Beard’s vision for existential hope

Feb 4, 2026 • 49min
Raising science ambition: how to identify the highest-impact research for an AI world | Anastasia Gamick
Anastasia Gamick, co-founder and CEO of Convergent Research, builds startup-style teams to create public-good scientific capabilities. She discusses Focused Research Organizations, high-impact projects like synapse mapping and provably safe software, prioritizing defensive bio and AI tooling, and how scientists can find and fund work that matters most for an AI future.

Jan 21, 2026 • 1h 1min
Jason Crawford on how technology expands human choice and control
Our fast-paced world isn’t spinning out of our control; we’re actually becoming more capable of steering it than ever before. Throughout history, technological progress has expanded human agency, that is, our ability to choose our destiny rather than being subject to the whims of nature.

Jason Crawford, founder of the Roots of Progress Institute, joins the podcast to discuss The Techno-Humanist Manifesto, his book exploring his philosophy of progress centered around human life and wellbeing. In our conversation, we dive into the core arguments of the manifesto:
- How we are more in control of our lives than ever before
- Why we should reframe the goal of “stopping climate change” into “controlling climate change” and work toward installing a “thermostat for the Earth”
- The value of nature and its interaction with humanity
- Allowing ourselves to celebrate human achievement and industrial civilization
- The concept of “solutionism”, a kind of optimism that acknowledges risks while keeping a proactive attitude toward solving problems
- Why two common fears around the slowing of progress – that we could run out of natural resources or new ideas – are actually unfounded
- The possibility that AI represents a transformation as significant as the Industrial Revolution or the invention of agriculture
- How to rebuild a culture of progress in the 21st century, from reforming scientific institutions to creating new, non-dystopian science fiction

Chapters:
[00:00] Cold open
[01:30] Intro: Jason Crawford and the Techno-Humanist Manifesto
[04:10] Defining progress as the expansion of human agency
[06:16] How to use our newfound agency to live a meaningful life
[10:07] Climate control: installing a “thermostat” for the Earth
[13:26] Anthropocentrism and the value of nature
[19:41] Ode to man: celebrating human achievement
[20:53] Solutionism: believing in our problem-solving abilities to tackle risks
[26:26] Why pessimism sounds smart but misses the solution space
[31:29] The myth of finite natural resources and the power of knowledge
[34:27] Why we are getting better at finding ideas faster than they get harder to find
[39:03] The Intelligence Age: a new mode of production
[41:19] Amplifying human agency in an AI-driven world
[43:09] Developing a healthy relationship with AI and attention
[46:28] The culture of progress and why we soured on the future
[50:10] Building the infrastructure for a global progress movement
[53:54] A 20-year vision for progress studies in the mainstream
[57:33] High-leverage regulations for progress: from nuclear to supersonic flight
[58:57] Jason Crawford’s existential hope vision

Jan 14, 2026 • 1h 10min
Elle Griffin on researching the ideal society, from utopian books to real-world examples
Elle Griffin, a writer and researcher focused on utopian futures, explores the challenge of envisioning ideal societies. She reflects on the influence of classic utopian literature and discusses innovative concepts like tax autonomy for states, a la carte federations, and the Mondragon model of worker cooperatives. Elle also addresses the governance of AI, advocating for employee control to ensure ethical practices. With her insights, she highlights how we can build a wiser, more equitable future using existing examples and creative thinking.


