
LessWrong (Curated & Popular) "Contradict my take on OpenPhil’s past AI beliefs" by Eliezer Yudkowsky
Dec 21, 2025
Eliezer Yudkowsky, an influential AI researcher and co-founder of MIRI, critiques the Effective Altruism community's past AI beliefs. He argues that Open Philanthropy misjudged AI timelines and risks, highlighting Ajeya Cotra's 30-year AGI estimate as a major misstep. Yudkowsky questions the organization's funding decisions and asks whether dissenting views had any real influence. He openly invites evidence that could challenge his perspective, emphasizing his commitment to truth and his willingness to adjust his views if proven wrong.
AI Snips
Dispute Over OpenPhil's AI Timelines
- Eliezer Yudkowsky argues OpenPhil and Oxford EA held and promoted flawed AI-timeline and ruin-probability views that shaped funding priorities.
- He asks insiders to show evidence that dissenting, more alarmist views actually controlled funding or public promotion before the GPT moment.
Personal Reported Interactions With OpenPhil
- Yudkowsky recalls being told that OpenPhil personnel treated Ajeya Cotra's and Joe Carlsmith's views as representative of the OpenPhil positions that shaped funding decisions.
- He describes being told verbally that those views would determine MIRI's funding prospects if it sought support.
Dissent Isn't The Same As Influence
- Yudkowsky distinguishes tolerance of internal dissent from actual institutional power to shape funding and public messaging.
- He emphasizes that assessing responsibility requires looking at organizational psychology, not the mere availability of dissenting views.