ForeCast

Forethought
Aug 3, 2025 • 1h 9min

[AI Narration] No Easy Eutopia

A narrated essay exploring whether a near-perfect future is actually easy to reach. It contrasts survival with flourishing and analyzes the many ways an abundant future could still be morally catastrophic. It surveys risks from digital beings, space resource lock-in, population ethics, and value fragility, and argues that common intuitions make utopia seem closer than it likely is.
Aug 3, 2025 • 11min

[AI Narration] Introducing Better Futures

William MacAskill, philosopher and effective altruism leader, reads his essay 'Introducing Better Futures.' He contrasts avoiding catastrophe with promoting human flourishing. He discusses scale, neglectedness, tractability, and the idea of 'viatopia' as a way to steer society toward near-best outcomes. Short, thought-provoking, and forward-looking.
Aug 3, 2025 • 33min

[AI Narration] The Basic Case for Better Futures: SF Model Analysis

This is an AI narration of "The Basic Case for Better Futures: SF Model Analysis" by William MacAskill and Philip Trammell. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III Audio for providing these automated narrations.
Aug 3, 2025 • 60min

[AI Narration] How to Make the Future Better

William MacAskill, philosopher and author on longtermism and effective altruism, narrates his essay on shaping the distant future. He outlines ways to keep options open during transformative change. He examines preventing AI-enabled autocracy, governing superintelligence, and rights for digital beings. He discusses space governance, slowing intelligence explosions, and improving collective decision making.
Aug 3, 2025 • 1h 17min

[AI Narration] Convergence and Compromise

A narrated deep dive into whether humanity can converge on broadly shared values or must instead settle for partial overlap plus bargaining. Island and flight analogies map the possible paths to great futures. Topics include moral realism versus subjectivism, risks such as early lock-in and threats to bargains, and whether abundance or superintelligent advice could change motivations.
Jul 9, 2025 • 1h 54min

AI Rights for Human Safety (with Peter Salib and Simon Goldstein)

Join Peter Salib, an expert in law and AI risk, and Simon Goldstein, a philosopher focusing on AI safety, as they explore the vital topic of AI rights. They discuss how establishing legal frameworks can prevent conflicts between humans and artificial intelligence. The conversation dives into the ethical implications of AI ownership and rights, touching on property laws and the importance of rights in fostering cooperation. They also examine the potential for AI to enhance human safety and welfare, raising critical questions about future governance and societal impact.
Jun 16, 2025 • 2h 55min

Inference Scaling, AI Agents, and Moratoria (with Toby Ord)

Toby Ord, a Senior Researcher at Oxford University focused on existential risks, dives into the intriguing concept of the ‘scaling paradox’ in AI. He discusses how scaling challenges affect AI performance, particularly the diminishing returns of deep learning models. The conversation also touches on the ethical implications of AI governance and the importance of moratoria on advanced technologies. Moreover, Toby examines the shifting landscape of AI's capabilities and the potential risks for humanity, emphasizing the need for a balance between innovation and safety.
May 21, 2025 • 28min

[AI Narration] The Industrial Explosion

This is an AI narration of "The Industrial Explosion" by Tom Davidson and Rose Hadshar. The article was first released on 21st May 2025. You can read more of our research at forethought.org/research. Thank you to Type III Audio for providing these automated narrations.
Apr 16, 2025 • 1h 15min

AI Tools for Existential Security (with Lizka Vaintrob)

Lizka Vaintrob discusses "AI Tools for Existential Security," co-authored with Owen Cotton-Barratt. A full transcript is available. To see all our published research, visit forethought.org/research.
Apr 4, 2025 • 24min

[AI Narration] Will Compute Bottlenecks Prevent a Software Intelligence Explosion?

Tom Davidson, a research analyst, examines whether a software intelligence explosion could be held back by compute bottlenecks. He explains how AI could improve rapidly without additional hardware, addresses objections drawn from empirical machine learning experiments, and critiques economic models that predict strict compute limits. He closes by suggesting alternative pathways to superintelligence, emphasizing how adaptable production methods could circumvent these bottlenecks.
