
Google DeepMind: The Podcast The Arrival of AGI with Shane Legg (co-founder of DeepMind)
Dec 11, 2025 Join Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind, as he tackles the intricate world of artificial general intelligence. He discusses the current capabilities of AI systems and their limitations compared to human cognition. Shane explores levels of AGI ranging from minimal AGI to superintelligence, and emphasizes the need for ethical considerations and robust testing methods. He also predicts significant societal shifts and job automation trends, and estimates a 50% chance of minimal AGI by 2028. This conversation is a deep dive into the future of intelligence!
How The Term 'AGI' Took Off
- Shane recounts coining 'AGI' while discussing generality with Ben Goertzel and later finding an earlier 1997 usage.
- The term shifted from a field descriptor to a public artifact category without a fixed definition.
Map Capability Distributions
- Capability distributions matter: systems may be superhuman in some areas and fragile in others.
- Understanding that distribution is crucial to assess opportunities and risks.
Monitor Chain-Of-Thought Reasoning
- Deploy chain-of-thought monitoring (sometimes called "System 2 safety") so AIs reason about ethical decisions transparently.
- Use inspectable reasoning to check intentions, not just outcomes.

