

The Bayesian Conspiracy
A conversational podcast for aspiring rationalists.
Episodes
Mentioned books

Apr 29, 2026 • 1h 36min
261 – The Button That Almost Nuked Glosso Got Eneasz Instead
They revive the red-button/blue-button thought experiment and dissect why a social app turned it into a real, emotional test. They explore how making hypotheticals personal strains relationships and fuels online tribal celebration. They debate mapping game-theory puzzles onto real-world ethics and when it is reasonable to refuse unmoored thought experiments.

Apr 15, 2026 • 1h 30min
260 – The Right to be Wrong About Doom
They debate how far the right to be wrong should extend, especially around controversial AI doom claims. Tensions from a Twitter kerfuffle and community norms about shaming and tolerance come up. They weigh comparisons to historical moral culpability, discuss violence as a tactic, and stress law, policy, and global coordination as responses to systemic risks.

Apr 1, 2026 • 1h 53min
259 – Chill Out on AI in 2028?
Casual debate about whether we should 'chill out' on AI alarmism in 2028, exploring tone, persuasion, and communication pitfalls. They weigh capabilities trajectories, singleton versus multipolar risk views, and likely real-world warning shots. Conversations cover human-guided AI creativity, an UnSlop fiction contest, fundraising dynamics, sustainable motivation for safety work, and plans to revive a rationalist learning community.

Mar 18, 2026 • 1h 42min
258 – How Effective Altruism Has Evolved
They debate whether some animal lovers should eat meat and weigh health risks, ethics, and lower-suffering seafood. They wrestle with where small donors fit amid big philanthropy and whether local, self-funded projects beat institutional giving. They probe EA culture, belonging, and alternatives like small cohorts and community networks.

Mar 1, 2026 • 1h 22min
257 – Pentagon Comes For Claude
We relive the last 48 hours of the future of humanity being wrestled over. The Pentagon wants to use Claude for comprehensive mass surveillance of Americans and autonomous kill-bots, and Anthropic says no. The Pentagon retaliates with extreme prejudice. With guest-star Matt.
LINKS
Washington Post summary
Anthropic’s response
Trump’s response
Hegseth’s unhinged lunacy
We Will Not Be Divided – Google and OpenAI employees open letter
Eliezer on the tech/govt war
Scott Alexander tweet
RSP comment
Opus3 Retirement
Paid Bonus content for the week – Full Video, Preshow Chat
Our Patreon, or if you prefer Our SubStack
Hey look, we have a discord! What could possibly go wrong?
We now partner with The Guild of the Rose, check them out.
LessWrong Sequence Posts Discussed in this Episode:
on hiatus, returning someday

Feb 18, 2026 • 1h 36min
256 – Writing for LLMs
We are inspired by Andrew Cutler’s Writing for AI to consider the value of writing for LLMs.
LINKS
Andrew Cutler’s Writing for AI
Gwern’s Writing for LLMs
Tracing Woodgrains’ Reliable Sources
Shambaugh’s An AI Agent Published a Hit Piece on Me
Eneasz’s Stone Age Billionaire Can’t Word Good
InkHaven
LessOnline
The main purpose of the AFFINE Seminar is to give promising newcomers to AI alignment an opportunity to acquire a deep understanding of some large pieces of the problem, making them better equipped for work on the mitigation of AI existential risk.
AFFINE Alignment Seminar
Paid Bonus content for the week – Preshow chatter, Full Show Video
00:00:49 – Announcements & Feedback
00:42:15 – Writing for AI
01:23:15 – AFFINE Alignment Seminar
01:31:11 – Guild of the Rose
01:33:37 – Thank the Supporter!

Feb 4, 2026 • 1h 35min
255 – Eneasz goes to CFAR, and Epistemically Honest Reassurance
A participant recounts an immersive CFAR workshop: hands-on practice, cohort dynamics, trigger plans, and lasting emotional shifts. The conversation explores Daystareld’s idea of epistemically honest reassurance and how to comfort others without lying. They weigh when to correct versus reassure and give truthful, empathetic phrasing that avoids false hope.

Jan 21, 2026 • 1h 36min
254 – The True Theme and Meaning of HPMOR
WSCFriedman gets to the core of what HPMOR is ACTUALLY about, and finally pinpoints why we love it so much, in his essay Harry Potter And The Methods Of Rationality Is A Disney Movie About A Serial Killer.
LINKS
Audio version of HPMOR is A Disney Movie About A Serial Killer, from AskWho
William’s blog, “As Our Days”
ACX Non-Book Review 2025 Winners Post
Just HPMOR substack, and Spotify playlist
Why the AI Water Issue Has Nothing to Do With Water (and audio version here, again from AskWho)
Money is Life
Eneasz’s post on InkHaven
Inkhaven.Blog – apply today!
Paid Bonus content for the week – Preshow chatter, Full Show Video
00:04:33 – Announcements & Feedback
00:27:24 – Eneasz’s Podcast Meta-Worries
00:28:21 – HPMOR Is A Disney Movie About A Serial Killer
01:28:17 – Guild of the Rose
01:31:18 – Thank the Supporter!

Jan 7, 2026 • 1h 44min
253 – The Seven Vicious Vices of Rationalists
A lively breakdown of Ben Pace’s seven vices of rationalists, treating useful traits turned toxic. Short takes on contrarianism, pedantry, over-explaining, social obliviousness, and stubbornness. Discussion of trust, when critique kills momentum, and real-world tradeoffs. Reflections on writing practice, show format changes, and community fundraising highlights.

Dec 31, 2025 • 12min
A Harried Meeting (audio)
A short story by Ben Pace. Original can be found here. Donate to the fundraiser here!
Harry sings karaoke here. Happy New Year.


