#236 – Max Harms on why teaching AI right from wrong could get everyone killed

80,000 Hours Podcast

Why Aligning AIs Seems Intractable

Max argues that we currently lack the ability to precisely specify moral goals, and that the opacity of machine learning systems makes alignment even harder.

Chapter begins at 24:40.
