Manifold

AI DOOM: Jesse Hoogland of Timaeus

Feb 26, 2026
Jesse Hoogland, AI safety researcher and co-founder of Timaeus, applies physics-inspired theory to neural networks. He discusses using Singular Learning Theory to link loss landscapes to model internals, fast capability growth, diverse paths to catastrophic risk, tensions between theory and empirical tinkering, funding and timelines, and what scientific understanding could buy us for safer AI.
ADVICE

Analyze Loss Surfaces Like Physical Energy Landscapes

  • Use physics-style thinking (loss surfaces, free-energy analogies) to interpret deep nets, since optimization landscapes behave like physical energy landscapes.
  • Jesse recommends loss-surface analysis to infer how models generalize and why particular internal structures form.
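The loss-surface idea can be made concrete with a toy experiment: evaluate a network's loss along a one-dimensional slice through parameter space, the way a physicist would plot an energy profile. The sketch below is illustrative only (the tiny net, synthetic data, and all names are assumptions, not anything from the episode):

```python
import numpy as np

# Toy setup: a one-hidden-layer net on synthetic data, and a 1-D slice
# of its loss surface along a random direction in parameter space.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                      # inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=64)   # targets

def unpack(theta):
    """Split a flat 9-parameter vector into the two weight matrices."""
    W1 = theta[:6].reshape(2, 3)
    W2 = theta[6:9].reshape(3, 1)
    return W1, W2

def loss(theta):
    """Mean-squared error of the tiny tanh network at parameters theta."""
    W1, W2 = unpack(theta)
    h = np.tanh(X @ W1)
    pred = (h @ W2).ravel()
    return np.mean((pred - y) ** 2)

theta0 = rng.normal(size=9) * 0.5        # some point in parameter space
direction = rng.normal(size=9)
direction /= np.linalg.norm(direction)    # unit direction to slice along

# Evaluate loss(theta0 + t * direction) for a range of t, like an
# energy profile along one coordinate of a physical system.
ts = np.linspace(-2.0, 2.0, 41)
profile = [loss(theta0 + t * direction) for t in ts]
print(min(profile))  # lowest loss seen along this slice
```

Flat regions in such profiles are the kind of degenerate geometry Singular Learning Theory studies; real analyses use trained models and many directions, but the mechanics are the same.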
INSIGHT

The Two Pillars Of AI Safety Debate

  • AI safety splits into two poles: theoretical 'start-at-the-end' alignment and prosaic empirical alignment.
  • Jesse positions himself between them, arguing timelines and discrete capability jumps are the main crux determining which approach matters most.
INSIGHT

AI Safety Funding Has Grown But Stays Concentrated

  • Funding for AI safety has diversified but remains concentrated; Open Philanthropy and the Survival and Flourishing Fund are major backers.
  • Jesse estimates total annual funding (including lab spending) at a few hundred million to half a billion dollars.