The Intelligence Horizon

Mar 20, 2026 • 1h 31min

Nathan Labenz: Why Transformative AI Is Coming With or Without New Breakthroughs

Nathan Labenz (host of The Cognitive Revolution podcast) joins The Intelligence Horizon to make the case that, with or without major algorithmic breakthroughs, we already have enough evidence to conclude that AI will transform the economy and the geopolitical landscape in the coming years. From there, we dig into one of the strangest features of the current moment: despite rapid capability gains and compressed timelines, experts in fields like economics and forecasting still fundamentally disagree on questions such as whether recursive self-improvement will be explosive or gradual and how deeply AI will restructure the economy. Nathan walks through the competing paradigms that keep these communities talking past each other, and explains why years of new data on AI capabilities and real-world economic impacts have done surprisingly little to narrow the disagreement. We then turn to alignment and safety, where Nathan explains that no one he has spoken to across hundreds of conversations can point to a single approach they fully trust to solve the problem. Instead, the most credible path forward may be a defense-in-depth strategy (combining interpretability, AI control techniques, formal verification for cybersecurity, and bio preparedness) that collectively might let us muddle through. Finally, we discuss the US-China dynamic, where Nathan pushes back on the prevailing race framing: he argues that Americans and Chinese share far more with each other as humans than either does with AI, and that the adversarial posture makes the cooperation we actually need much harder to achieve.

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Co-hosts: Owen Zhang and Will Sanok Dufallo
Media Heads: Chloe Park and Yasmin Rodriguez Rascon
Video Editor: Elly Zhang
Feb 10, 2026 • 1h 2min

Zoë Hitzig Left OpenAI. Here’s What She Told Us Weeks Before.

Zoë Hitzig, a research scientist studying economics, privacy, and AI governance, discusses diverging corporate incentives and the need for robust governance. She explores how people actually use ChatGPT, the shifting nature of jobs and software work, access and welfare benefits beyond GDP, and proposals like universal basic compute and new corporate accountability mechanisms.
Feb 10, 2026 • 1h 19min

Thomas Woodside (Secure AI Project): What SB 53 Actually Does and What Comes Next in AI Policy

Thomas Woodside (Co-Founder & Senior Policy Advisor at the Secure AI Project, and a lead advocate for California’s SB 53) joins The Intelligence Horizon Podcast to break down what SB 53 actually does and what it signals about where AI regulation is headed. We cover the bill’s core requirements, the logic behind them, and how they aim to reduce catastrophic risk from frontier models. We also situate SB 53 in the broader policy landscape, compare it to SB 1047, and discuss what Thomas thinks the next concrete governance steps should be as AI capabilities continue to scale.

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
YouTube: @TheIntelligenceHorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Email: theintelligencehorizon@gmail.com
Co-hosts: Owen Zhang and Will Sanok Dufallo
Production Lead: Kaitlyn Smith
Social Media Heads: Hailey Love, Chloe Park, and Yasmin Rodriguez Rascon
Nov 16, 2025 • 1h 38min

Thomas Larsen (AI 2027): We have to start preparing for AGI

In this episode, Thomas Larsen of the AI Futures Project joins us to dissect the public's reaction to the widely influential paper "AI 2027," which he co-authored, and makes the case that superintelligent AI is highly likely within our lifetimes, and plausibly imminent in the next few years. Thomas also lays out why he’s pessimistic that risks from misaligned and misused AI will be handled in time. This was a fascinating and thought-provoking discussion on the challenges ahead in AI security.

Check out "AI 2027" here: https://ai-2027.com
Learn more about the AI Futures Project here: https://ai-futures.org

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon
Feel free to also reach out at theintelligencehorizon@gmail.com
Co-hosts: Owen Zhang and Will Sanok Dufallo
Video Producer: Kaitlyn Smith
Social Media Manager: Nancy Javkhlan
