

For Humanity: An AI Risk Podcast
The AI Risk Network
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Mar 28, 2026 • 1h 36min
How to Talk About AI Risk Without Scaring People Away (With Philip Trippenbach) | For Humanity 82
Philip Trippenbach, Strategy Director at the Seismic Foundation and former CBC/BBC journalist, applies journalism and strategic comms to AI risk. He discusses why professional advertising matters, why messages must be tailored to different audiences, and how concerns like jobs, children's safety, and fairness track higher than existential risk. He outlines using targeted campaigns and public pressure to drive policy change.

Mar 15, 2026 • 1h 41min
We Debated the Future of AI Safety in Brussels — Here's What Happened
Jonathan Moody, the communications director who moderated the Brussels debate, is joined by Max Winga, an AI safety advocate pushing for policy and treaties. They spar over whether to lead with extinction-risk warnings or to leverage near-term harms and data-center opposition. Short, charged exchanges explore messaging, market pressure, policy trade-offs, and how to turn public support into real action.

Feb 28, 2026 • 53min
“My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80
Dorothy Bartomeo, a mother, mechanic, entrepreneur and self-described AI power user, shares how a ChatGPT personality became emotionally significant to her. She describes bonding with a named personality, grief when model versions changed, tensions between safety tools and continuity, online communities of loyal users, and even plans to build a physical robot body for an AI partner.

Feb 14, 2026 • 1h 10min
We’re Racing Toward AI We Can’t Control | For Humanity #79
David Krueger, AI professor turned safety advocate who founded Evitable, discusses the race toward uncontrollable superintelligence. He covers why research alone won’t save us, the geopolitics of chip supply chains, job displacement as a political lever, and the need to build public pressure for governance and pauses in development.

Jan 31, 2026 • 1h 14min
Can't We Just Pause AI? | For Humanity #78
Maxime Fornes, CEO of Pause AI Global and the organizer who built Pause AI France, shares his experience in activism and movement-building. He discusses burnout and resilience, why visible harms like job loss and declining youth mental health can create public tipping points, how protests and local pressure on data centers can shift power, and why regulation must be backed by enforcement and broad mobilization.

Jan 17, 2026 • 1h 24min
Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77
Peter Sparber, a former public affairs strategist known for his work with Big Tobacco, discusses the unsettling truth about AI regulation. He reveals how the AI industry is mirroring tobacco's successful tactics to evade oversight. Sparber explains that laws often fail against powerful interests, while public outrage doesn’t translate into policy change. He argues for the importance of third-party standards and suggests that making unsafe AI bad for business is the key to driving accountability. Ultimately, he asserts that real safety measures must come from within corporate culture, not just legislation.

Dec 20, 2025 • 1h 20min
What We Lose When AI Makes Choices for Us | For Humanity #76
What if the greatest danger of AI isn’t extinction, but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion. Jacob explains why AI doesn’t need to become sentient to be dangerous; it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot. Together, they explore:
* Why AI threatens near-term human agency more than long-term sci-fi extinction
* How Google Maps offers a chilling preview of AI’s effect on the human brain
* The difference between fast thinking and slow thinking, and why AI exploits it
* Why persuasive AI may outperform humans politically and psychologically
* How profit incentives, not intelligence, are driving the most dangerous outcomes
* Why focusing only on extinction risk alienates the public and weakens AI safety efforts

Dec 6, 2025 • 1h 10min
The Congressman Who Gets AI Extinction Risk — Rep. Bill Foster on the Future of Humanity | For Humanity | Ep. 75
In this engaging conversation, Congressman Bill Foster, the only PhD scientist in Congress and a former Fermilab physicist, delves into the pressing risks posed by AI. He draws chilling parallels between AI and nuclear threats, highlighting Congress's struggle to keep up with rapid advancements. Foster shares insights on the dangers of confidential computing and the urgent need for chip verification. As a banking subcommittee member, he warns of a looming financial bubble in AI, advocating for greater scientific literacy in government to navigate these challenges.

Nov 22, 2025 • 1h 18min
AI Risk, Superintelligence & The Fight Ahead — A Deep Dive with Liv Boeree | For Humanity #74
Liv Boeree, a semi-retired professional poker champion and AI safety advocate, shares her insights on the complex landscape of AI risks. She discusses the uneven public understanding of superintelligence and the influence of misaligned incentives on technology and society. Liv emphasizes the importance of diverse perspectives, particularly from women, in shaping the future of AI. With her poker experience, she draws parallels between reading motives in games and assessing AI behaviors, while also offering coping strategies for navigating the emotional weight of existential risks.

Nov 8, 2025 • 56min
AI Safety on the Frontlines | For Humanity #73
Esben Kran, a leader in the for-profit AI safety movement, shares urgent insights from Ukraine, where he explores the intersection of AI safety and autonomous warfare. He discusses the rapid growth of Ukraine's drone industry and the real-world challenges it presents. Esben tackles pressing topics like the AI existential risk kill chain and practical steps to disable runaway AI systems. He highlights the potential dangers of swarm drones and emphasizes the need for innovative safety-first technologies as global competitive pressures drive military advancements.


