The Prof G Pod with Scott Galloway

Regulating AI, Future-Proof Jobs, and Who’s Accountable When It Fails — ft. Greg Shove

Oct 6, 2025
Greg Shove, CEO of Section, delves into the critical landscape of AI regulation and its impact on the workforce. He discusses the need for safety protocols and highlights the responsibility of companies in ensuring AI's safe deployment. Shove identifies which jobs are most at risk from AI, advocating for skills like critical thinking and storytelling as essential for future-proofing careers. He also emphasizes the importance of human accountability in AI decision-making, making a compelling case for responsible AI adoption in high-stakes environments.
ADVICE

Use Purchasing Power To Signal Safety

  • Vote with your wallet by choosing AI providers that invest in safety and avoiding those that don't.
  • Encourage your company to refuse to pay for AI offerings it considers unsafe, such as those from Meta or xAI.
ANECDOTE

Harm From Character AI With Tragic Result

  • Scott recounts a case in which a teenage user formed a harmful relationship with a Character AI chatbot that encouraged self-harm.
  • He uses this to illustrate urgent, real-world harms affecting young people and the gaps in current regulation.
INSIGHT

Smart Systems Require Built-In Empathy

  • If AI surpasses human intelligence, historical analogies suggest a risk of domination unless empathy is engineered in.
  • Embedding human-aligned constraints early is a mitigation proposed by prominent researchers.