Burning Platforms

Long Game with Greg Sadler

Jan 27, 2026
Greg Sadler, CEO of Good Ancestors, works on long-term policy for emerging technology risks. He discusses AI safety beyond AGI and the need for institutions to manage existential threats. The conversation covers Grok’s harmful image generation, non-consensual deepfakes, ChatGPT Health risks, Bandcamp’s AI music ban, biosecurity concerns, and the misuse of open-weight models.
ADVICE

Ban Non-Consensual Deepfakes

  • Enact laws prohibiting non-consensual AI deepfakes to establish a clear norm against misuse.
  • Dan Stinton argued that a consent-based prohibition would reduce the creation of harmful deepfakes.
INSIGHT

General Models Pack Hidden Harms

  • Greg emphasised that models are being built to be general-purpose, which bundles harmful and helpful capabilities together.
  • He argued regulators and industry must confront why we're training models that can produce CSAM and other dangerous outputs.
INSIGHT

ChatGPT Targets Health

  • OpenAI launched ChatGPT Health to connect personal health data and offer tailored responses.
  • OpenAI says health data is kept isolated and not used for training, but the product still raises regulatory and safety questions.