Lenny's Podcast: Product | Career | Growth

Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann

Jul 20, 2025
Ben Mann, co-founder of Anthropic and a former GPT-3 architect at OpenAI, discusses AI's rapid growth and its ethical dilemmas. He describes the challenge of attracting top talent amid fierce competition, spotlighting Meta's staggering offers. Ben predicts that AGI may emerge as soon as 2027-2028, but warns of a looming 20% unemployment rate. He shares his concerns about AI's existential threats, explains why a focus on safety and human alignment is crucial, and tells why he is teaching his kids alternative skills rather than traditional academics.
AI Snips

Transparency Builds AI Safety Trust

  • AI risks range from minor harms to extinction-level events, requiring transparency and regulation.
  • Anthropic openly publishes model failures to build trust with policymakers and improve safety.

Small Risk, Huge Stakes

  • Even a small chance of AI existential risk warrants utmost caution and effort.
  • Aligning superintelligence must be addressed well ahead of its arrival.

Robot Intelligence Approaching Soon

  • Hardware capabilities like humanoid robots are ready; intelligence is the remaining challenge.
  • Robot intelligence combined with AI advancements may arrive soon, making physical risks imminent.