
The Existential Hope Podcast: Andrew Critch on what AGI might look like in practice
Dec 11, 2025
Andrew Critch, an AI safety researcher and founder of tools such as NotADoctor.ai, dives into the crucial question of what we will do with AGI. He argues that it is not just AGI's arrival but our choices about how to use it that will dictate its impact. Critch suggests that AGI may well turn out to be friendly, and he recommends focusing on shared moral values rather than perfectionism. He underlines the importance of building helpful AI products today, advocating for cultural change through practical solutions rather than endless debate.
AI Snips
Harm Versus Neglect Matters
- Distinguish harm from neglect: actively harming someone crosses a boundary and is morally different from failing to help them.
- Pursuing harmlessness should mean avoiding boundary-crossing harms, not necessarily maximizing benefit.
Ship Products To Change Culture
- Build practical products that solve real problems instead of only debating visions or narratives.
- Use multi-agent and synthesis tools to compare answers from different models and produce more thorough, less biased conclusions.
Automate Security To Reduce Vulnerabilities
- Prioritize automated computer-security checks to reduce technical debt and cyber vulnerabilities.
- Secure code lowers the risks from cyberwarfare and from early AI takeover vectors.
