
#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

80,000 Hours Podcast


Misalignment as the core obstacle

Ajeya stresses that misaligned AIs could actively undermine safety work and must be addressed before relying on AI labor defensively.

Segment begins at 53:53.
