Long Game with Greg Sadler
Jan 27, 2026

Greg Sadler, CEO of Good Ancestors, works on long-term policy for emerging technology risks. He discusses AI safety beyond AGI and the need for institutions capable of managing existential threats. The conversation covers Grok's harmful image generation, non-consensual deepfakes, ChatGPT Health risks, Bandcamp's AI music ban, biosecurity concerns, and open-weight model misuse.
AI Snips
Ban Non-Consensual Deepfakes
- Enact laws prohibiting non-consensual AI deepfakes to establish a clear norm against misuse.
- Dan Stinton argued that a consent-based prohibition would reduce harmful deepfake creation.
General Models Pack Hidden Harms
- Greg emphasised that models are being built to be general-purpose, which bundles harmful and helpful capabilities together.
- He argued regulators and industry must confront why we're training models that can produce CSAM and other dangerous outputs.
ChatGPT Targets Health
- OpenAI launched ChatGPT Health to connect to personal health data and offer tailored responses.
- OpenAI says health data is isolated and not used for training, but the product still raises regulation and safety questions.