Doom Debates!

Justin Helps (@Primer on YouTube) Is Worried About AI Takeover

Apr 30, 2026
Justin Helps is the science educator behind Primer on YouTube, a physics/materials grad turned AI-safety communicator. He explains why he assigns a 70% p(doom) by 2100. They debate AGI timelines, whether current models can scale to world-shaping agency, the risks of fast digital copies, and whether pauses or policy can curb catastrophic outcomes.
INSIGHT

Can Versus Will Is the Core of p(doom)

  • Justin Helps separates AI risk into two questions: can an AI kill us, and will it want to?
  • He assigns a low short-term 'can' probability (e.g., ~5% by 2040) but a high long-term 'will' probability, leading to a 70% p(doom) by 2100.
INSIGHT

Physical Experiments Are A Real Timeline Brake

  • Justin doubts AIs will deduce complex physical tech like nanotech a priori and emphasizes experimental constraints.
  • He argues that labs are messy and physical experiments are slow, so breakthroughs that require experimentation likely push doom timelines later.
INSIGHT

Low Short-Term p(doom) Still Requires Action

  • Justin emphasizes that different p(doom) numbers still demand similar actions; reducing short-term risk is urgent regardless of the precise odds.
  • He argues that even 5–10% by 2040 is unacceptably high and is enough to motivate policy and alignment efforts.