
"Contradict my take on OpenPhil’s past AI beliefs" by Eliezer Yudkowsky

LessWrong (Curated & Popular)


Example: Joe Carlsmith on Power-Seeking Risk

Eliezer cites Joe Carlsmith's 5% estimate of existential ruin from power-seeking AI as an example of flawed multi-stage inference.

