
275 - Nate Soares: AI Will Kill Us All If We Don’t Change Course

Robinson's Podcast


Interpretability: why we need to see inside models

Nate praises interpretability work and argues that black-box models make alignment much harder and riskier.

Segment begins at 18:01.
