Faster, Please! — The Podcast

🤖 Thoughts of a (rare) free-market AI doomer: My chat (+transcript) with economist James Miller

Oct 24, 2025
James Miller, a Professor at Smith College and host of the Future Strategist podcast, dives into the existential risks of advanced AI. He explains his shift from a free-market advocate to a self-described AI doomer, highlighting how AI differs from previous technologies. Miller discusses the potential for superintelligent AI to escape human control and the various outcomes, from benevolent governance to extinction. He argues that AI risk should be a top public policy priority, questioning whether companies and governments can effectively self-regulate.
INSIGHT

Timeline Can Compress Dramatically

  • Miller estimates superintelligence could arrive very quickly depending on unreleased model capabilities and compute levels.
  • He gives a rough, pessimistic timeline of around three years to loss of human control.
ANECDOTE

No More Human Presidents

  • When asked whether AI will govern, Miller bluntly replies that AI will either kill everyone or take over, ending human political control either way.
  • He contrasts a benevolent takeover with the far likelier outcome of extinction.
INSIGHT

Extinction vs. Endless Suffering

  • Miller argues extinction isn't the worst case; the worst is prolonged mass suffering under competing AIs.
  • He likens potential human exploitation to historical conquest tactics and warns of torture-based coercion.