
The AI in Business Podcast: Human Compatible AI and AGI Risks - with Stuart Russell of the University of California
Sep 27, 2025
Stuart Russell, Distinguished Professor of Computer Science at UC Berkeley and AI safety advocate, dives into the pressing risks of AGI development. He highlights the urgency of creating responsible governance to prevent catastrophic outcomes. Russell discusses the corporate race toward AGI and its dangers, the potential for AI self-improvement, and the importance of safety regulations. He also explores the necessity of international cooperation and the role of public awareness in shaping policy. His perspective emphasizes both the opportunities and the challenges AI presents for humanity.
AI Snips
Design Assistants, Not Independent Agents
- Build AI as provably assistive systems that infer and pursue human goals, rather than as independent agents.
- Use assistance-game-style designs in which the AI helps complete the tasks the human actually intends (a toy sketch follows below).
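To make the assistance-game idea concrete, here is a minimal toy sketch, not Russell's formal framework: the assistant is uncertain about which goal the human has, observes a human action, performs a Bayesian update over goals, and then chooses the assistive action with the highest expected value under its updated belief. All goal names, action names, probabilities, and values below are invented for illustration.

```python
# Toy assistance-game-style loop (illustrative only, assumed values throughout):
# the assistant infers the human's goal from behavior and acts to help.

GOALS = ["write_report", "clean_data"]

# Assistant's prior belief over the human's goal.
belief = {"write_report": 0.5, "clean_data": 0.5}

# Likelihood of each observable human action given each goal (assumed).
LIKELIHOOD = {
    "opens_editor":      {"write_report": 0.8, "clean_data": 0.2},
    "opens_spreadsheet": {"write_report": 0.2, "clean_data": 0.8},
}

# Usefulness of each assistant action under each goal (assumed).
ASSIST_VALUE = {
    "fetch_references": {"write_report": 1.0, "clean_data": 0.1},
    "dedupe_rows":      {"write_report": 0.1, "clean_data": 1.0},
    "ask_human":        {"write_report": 0.6, "clean_data": 0.6},
}


def update_belief(belief, human_action):
    """Bayesian update of the belief over goals after seeing a human action."""
    unnorm = {g: belief[g] * LIKELIHOOD[human_action][g] for g in GOALS}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}


def best_assist_action(belief):
    """Choose the assistant action with the highest expected value under the belief."""
    def expected_value(action):
        return sum(belief[g] * ASSIST_VALUE[action][g] for g in GOALS)
    return max(ASSIST_VALUE, key=expected_value)


if __name__ == "__main__":
    belief = update_belief(belief, "opens_editor")
    print("posterior:", belief)                        # shifts toward write_report
    print("assist with:", best_assist_action(belief))  # -> fetch_references
```

The point of the sketch is the design stance Russell describes: the AI never assumes it already knows the objective; it keeps a belief over what the human wants, and deferential options such as asking the human remain attractive whenever that belief is uncertain.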
Compliance Gap Drives Developer Resistance
- Developers resist red lines partly because they lack methods to demonstrate compliance with safety requirements.
- If developers can't show how to be safe, governments should halt dangerous development until solutions exist.
Choose Technology Paths That Can Be Certified Safe
- Some AI research paths may be intrinsically unsafe and require returning to alternative architectures.
- Just as dangerous biological or animal-based approaches have been replaced with safer alternatives, we must choose technology paths we can certify as safe.




