
ForeCast [AI Narration] Persistent Path-Dependence
Aug 3, 2025
William MacAskill, a philosopher focused on longtermism and effective altruism, outlines how near-term events can lock in far-future outcomes. He surveys mechanisms such as AGI-run institutions, immortality, designed beings, and space settlement. Short-term power concentration combined with technological maturity can compound into near-irreversible lock-in. He argues these dynamics make steering the near future morally urgent.
Short-Term Actions Can Shape Deep Futures
- The view that only extinction reduction matters assumes other interventions' effects will always wash out.
- William MacAskill argues that several foreseeable events could produce persistent, predictable long-run effects.
Extinction Prevention Often Changes Who, Not Whether
- Reducing extinction risk mainly affects who occupies our corner of the cosmos, not whether it is occupied.
- MacAskill notes that the prospect of replacement civilizations (non-human or alien) reduces the expected gain from extinction prevention.
AI Takeover Is One Of Many Path Risks
- Preventing AI takeover only matters if an AI-directed civilization would be worse than a human-directed one.
- MacAskill argues similar path-dependent events arise from various human-driven outcomes this century.