Radio Davos

The day after AGI: Two 'rock stars' of AI on what it will mean for humanity

Feb 12, 2026
Demis Hassabis, the neuroscientist-turned-AI leader at DeepMind, and Dario Amodei, the safety-focused CEO of Anthropic, discuss AGI timelines, self-improving models, and coding automation. They debate the limits of machine scientific creativity, model-driven automation, the impact on junior roles, geopolitical chip controls, and how to weigh AI's enormous benefits against its urgent risks.
INSIGHT

Closing The Loop Means Autonomous Learning

  • "Closing the loop" means agents act, observe the outcomes of their actions, and adapt over time, rather than just generating outputs.
  • That autonomy moves systems toward genuine agency and introduces new safety and governance risks.
ANECDOTE

Open-Source Personal Agents Example

  • Open-source agents like OpenTrO let users grant them access to personal data and tweak their behavior.
  • Dario warns that this expands the attack surface, raising cybersecurity risks for users who grant such system access.
ADVICE

Prepare For Fast Automation-Driven Progress

  • Expect coding and research automation to accelerate model development and possibly close the self-improvement loop.
  • Prepare for rapid progress by focusing safety work on the capabilities that advance fastest.