
#236 – Max Harms on why teaching AI right from wrong could get everyone killed

80,000 Hours Podcast


Deceptive Alignment and Overdetermined Failure

They discuss deceptive alignment (alignment faking), the risk of AI systems escaping human control, and why many partially independent failure modes compound, making a high overall level of risk overdetermined.

Segment begins at 40:55.
