

The Bayesian Conspiracy
A conversational podcast for aspiring rationalists.
Episodes

Mar 18, 2026 • 1h 42min
258 – How Effective Altruism Has Evolved
They debate whether some animal lovers should eat meat and weigh health risks, ethics, and lower-suffering seafood. They wrestle with where small donors fit amid big philanthropy and whether local, self-funded projects beat institutional giving. They probe EA culture, belonging, and alternatives like small cohorts and community networks.

Mar 1, 2026 • 1h 22min
257 – Pentagon Comes For Claude
We relive the last 48 hours of the future of humanity being wrestled over. The Pentagon wants to use Claude for comprehensive mass surveillance of Americans and autonomous kill-bots, and Anthropic says no. The Pentagon retaliates with extreme prejudice. With guest-star Matt.
LINKS
Washington Post summary
Anthropic’s response
Trump’s response
Hegseth’s unhinged lunacy
We Will Not Be Divided – Google and OpenAI employees open letter
Eliezer on the tech/govt war
Scott Alexander tweet
RSP comment
Opus3 Retirement
Paid Bonus content for the week – Full Video, Preshow Chat
Our Patreon, or if you prefer Our SubStack
Hey look, we have a discord! What could possibly go wrong?
We now partner with The Guild of the Rose, check them out.
LessWrong Sequence Posts Discussed in this Episode:
on hiatus, returning someday

Feb 18, 2026 • 1h 36min
256 – Writing for LLMs
We are inspired by Andrew Cutler’s Writing for AI to consider the value of writing for LLMs.
LINKS
Andrew Cutler’s Writing for AI
Gwern’s Writing for LLMs
Tracing Woodgrains’ Reliable Sources
Shambaugh’s An AI Agent Published a Hit Piece on Me
Eneasz’s Stone Age Billionaire Can’t Word Good
InkHaven
LessOnline
The main purpose of the AFFINE Seminar is to give promising newcomers to AI alignment an opportunity to acquire a deep understanding of some large pieces of the problem, making them better equipped for work on the mitigation of AI existential risk.
AFFINE Alignment Seminar
Paid Bonus content for the week – Preshow chatter, Full Show Video
00:00:49 – Announcements & Feedback
00:42:15 – Writing for AI
01:23:15 – AFFINE Alignment Seminar
01:31:11 – Guild of the Rose
01:33:37 – Thank the Supporter!
Our Patreon, or if you prefer Our SubStack
Hey look, we have a discord! What could possibly go wrong?
We now partner with The Guild of the Rose, check them out.
LessWrong Sequence Posts Discussed in this Episode:
on hiatus, returning soon

Feb 4, 2026 • 1h 35min
255 – Eneasz goes to CFAR, and Epistemically Honest Reassurance
A participant recounts an immersive CFAR workshop: hands-on practice, cohort dynamics, trigger plans, and lasting emotional shifts. The conversation explores Daystareld’s idea of epistemically honest reassurance and how to comfort others without lying. They weigh when to correct versus reassure and give truthful, empathetic phrasing that avoids false hope.

Jan 21, 2026 • 1h 36min
254 – The True Theme and Meaning of HPMOR
WSCFriedman gets to the core of what HPMOR is ACTUALLY about, and finally pinpoints why we love it so much, in his essay Harry Potter And The Methods Of Rationality Is A Disney Movie About A Serial Killer.
LINKS
Audio version of HPMOR is A Disney Movie About A Serial Killer, from AskWho
William’s blog, “As Our Days”
ACX Non-Book Review 2025 Winners Post
Just HPMOR substack, and Spotify playlist
Why the AI Water Issue Has Nothing to Do With Water (and audio version here, again from AskWho)
Money is Life
Eneasz’s post on InkHaven
Inkhaven.Blog – apply today!
Paid Bonus content for the week – Preshow chatter, Full Show Video
00:04:33 – Announcements & Feedback
00:27:24 – Eneasz’s Podcast Meta-Worries
00:28:21 – HPMOR Is A Disney Movie About A Serial Killer
01:28:17 – Guild of the Rose
01:31:18 – Thank the Supporter!
Our Patreon, or if you prefer Our SubStack
Hey look, we have a discord! What could possibly go wrong?
We now partner with The Guild of the Rose, check them out.
LessWrong Sequence Posts Discussed in this Episode:
on hiatus, returning soon

Jan 7, 2026 • 1h 44min
253 – The Seven Vicious Vices of Rationalists
A lively breakdown of Ben Pace’s seven vices of rationalists, treating useful traits turned toxic. Short takes on contrarianism, pedantry, over-explaining, social obliviousness, and stubbornness. Discussion of trust, when critique kills momentum, and real-world tradeoffs. Reflections on writing practice, show format changes, and community fundraising highlights.

Dec 31, 2025 • 12min
A Harried Meeting (audio)
A short story by Ben Pace. Original can be found here. Donate to the fundraiser here!
Harry sings karaoke here. Happy New Year.

Dec 25, 2025 • 5min
House elves are crystalized cosmic power
A short story by Prerat. Original can be found here. Merry Xmas!

Dec 24, 2025 • 4min
Christmas 2025: Couple of Quick Shoutouts
Hello and happy holiday season to you all! Eneasz is back and we’re here to say hi and give a shoutout to Skyler’s awesome annual LessWrong survey and Lighthaven’s fundraising event. See the links below for more details.
LINKS
The LessWrong survey will remain open from now until at least January 7th, 2026.
Lighthaven is once again seeking support. If you’re inclined to help, check out all of the details here.
Related, our interview with Oliver from last year.
Our Patreon, or if you prefer Our SubStack
Hey look, we have a discord! What could possibly go wrong? (also merch)
We now partner with The Guild of the Rose, check them out.
LessWrong Sequence Posts Discussed in this Episode:
on hiatus, returning soon

Dec 10, 2025 • 1h 52min
252 – The 12 Virtues of Rationality, with Alex and David
A lively tour of Yudkowsky's Twelve Virtues, unpacking curiosity, relinquishment, lightness, and evenness. They explore argument as testing, empiricism, simplicity, humility, and precision. Scholarship, flow-like “the void,” and cooperation under uncertainty also feature. Personal reflections and a pitch for the Guild of the Rose round out the conversation.


