#236 – Max Harms on why teaching AI right from wrong could get everyone killed

80,000 Hours Podcast

Adversarial Solutions and Out‑of‑Distribution Risks

Max compares adversarial examples to goal optimization, explaining why optimizing over broader distributions surfaces bizarre optimal solutions.

Play episode from 01:00:26