

ForeCast
Forethought
ForeCast is a podcast from Forethought, where we hear from authors about their new research.
Episodes

Sep 9, 2025 • 1h 45min
Is Gradual Disempowerment Inevitable? (with Raymond Douglas)
Raymond Douglas, a researcher at TELIC Research, dives into the societal effects of AI and the concept of gradual disempowerment. He discusses the existential risks tied to human-level AIs and the complex dynamics between human control and AI governance. The conversation highlights how AI impacts economic agency and emerging inequalities, as technology supersedes human capabilities. Douglas also emphasizes the urgent need for policy responses and ethical engagement to navigate the cultural and moral dimensions of AI's integration into society.

Sep 7, 2025 • 55min
[Article] Intelsat as a Model for International AGI Governance
Explore how Intelsat could serve as a blueprint for international cooperation in regulating Artificial General Intelligence. The discussion contrasts its historical successes with challenges faced during the Cold War. Delve into the evolution of global satellite governance and how key agreements shaped the balance of power. Finally, discover the implications of these historical insights for future multilateral AGI frameworks, emphasizing the importance of collaboration among nations for technological advancement.

Aug 31, 2025 • 1h 51min
[Article] Will AI R&D Automation Cause a Software Intelligence Explosion?
Dive into the possibility of AI automating its own research and development. Discover how advancements in deep learning and LLMs are shaping the future of software intelligence. Explore the prospect of exponential growth in AI capabilities and the crucial role of effective governance in managing rapid technological change. Learn about feedback loops that could propel AI advancements and the challenges posed by diminishing returns. The discussion reveals how we might navigate this exciting yet uncertain landscape.

Aug 28, 2025 • 1h 24min
Should AI Agents Obey Human Laws? (with Cullen O'Keefe)
Cullen O'Keefe, Director of Research at the Institute for Law & AI, dives deep into the complexities of law-following AI. He discusses how AI agents can navigate legal frameworks and the ethical dilemmas of using them as 'henchmen' for human interests. O'Keefe examines the future of AI in automating tasks, the vital need for accountability, and the challenges in aligning AI behavior with human values. He emphasizes the importance of updating regulatory structures to manage AI's potential misuse while safeguarding ethical standards.

Aug 26, 2025 • 2h 10min
[Article] AI-Enabled Coups: How a Small Group Could Use AI to Seize Power
Explore how advanced AI could empower a small group to execute coups with alarming efficiency. The discussion highlights the risks of loyalty manipulation and power concentration that could disrupt democracy. Scenarios are laid out where exclusive access to AI leads to unprecedented military and societal upheaval. The conversation also critiques existing governance frameworks, advocating for new safeguards to protect democratic systems from emerging AI threats.

Aug 20, 2025 • 30min
[AI Narration] Could One Country Outgrow the Rest of the World After AGI?
This is an AI narration of "Could One Country Outgrow the Rest of the World After AGI? Economic Analysis of Superexponential Growth" by Tom Davidson. The article was first released on 20th August 2025.
You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

Aug 17, 2025 • 2h 14min
How Can We Prevent AI-Enabled Coups? (with Tom Davidson)
Tom Davidson, a Senior Research Fellow at Forethought, dives into the urgent topic of AI-enabled coups. He discusses the risks posed by AI in consolidating power illegitimately, emphasizing the need for robust checks and balances. The conversation highlights the necessity of ethical oversight in military R&D and the importance of stakeholder collaboration. Davidson warns about potential manipulation within AI systems and advocates for clear guidelines to protect democratic values. With insights from historical precedents, he stresses the need for vigilance in governance.

Aug 4, 2025 • 2h 54min
Should We Aim for Flourishing Over Mere Survival? (with Will MacAskill)
Will MacAskill, a philosopher and co-founder of 80,000 Hours, discusses his research series, 'Better Futures'. He delves into the importance of transitioning from mere survival to thriving in the face of existential risks. Topics include the interplay of human flourishing and ethical governance, the pursuit of an ideal future, and the complexities surrounding moral catastrophes. MacAskill emphasizes the need for collective action and philosophical reflection as we navigate the uncertain dynamics of AI and global challenges, shaping a more hopeful tomorrow.

Aug 4, 2025 • 1h 7min
[AI Narration] How quick and big would a software intelligence explosion be?
Delve into the concept of a software intelligence explosion. Discover how advancements in AI could compress years of progress into mere months. Understand the critical parameters driving this acceleration and the significant uncertainties involved. The narration explores the potential scale of automated AI researchers and what that could mean for future innovation, and addresses the limitations of current models and the importance of cautious forecasting in this rapidly evolving field.

Aug 3, 2025 • 41min
[AI Narration] Persistent Path-Dependence
William MacAskill, philosopher focused on longtermism and effective altruism, outlines how near-term events can lock in far-future outcomes. He surveys mechanisms like AGI institutions, immortality, designed beings, and space settlement. Short-term power concentration and technological maturity can compound into near-irreversible lock-in. He argues these dynamics make steering the near future morally urgent.
