

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Apr 2, 2026 • 32min
“The Corner-Stone” by Benquo
A brisk look at who actually benefits from National Merit status and which colleges use it as a recruiting tool. A survey of how credential pipelines select for compliance and audience-pleasing performance over independent judgment. Stories and data on geographic and economic mismatches that leave high achievers undermatched. A historical lens tracing meritocratic ideas from wartime selection to today’s managerial credential class.

Apr 2, 2026 • 8min
“Systematically dismantle the AI compute supply chain.” by David Scott Krueger (formerly: capybaralet)
A critique of a documentary's narrow solution framing leads into a third way: removing advanced AI compute from the equation. The conversation maps the global compute supply chain and pinpoints chokepoints from fabs to data centers. Concrete strategies are proposed for shutting down or buying out production and for international monitoring to prevent covert restarts.

Apr 2, 2026 • 4min
“The quest for general intelligence is hitting a wall” by Sean Herrington
A survey of dramatic recent wins in math and coding alongside stubborn failures in symbolic reasoning. A look at opacity and why models often hallucinate or pursue unintended goals. Doubts about near-term architectural breakthroughs and concerns about scaling limits, context-window gaps, and risks from jailbreaks and shallow social alignment.

Apr 2, 2026 • 11min
“Intelligence Dissolves Privacy” by Vaniver
A discussion about how changing technological options reshape what societies regard as reasonable privacy. Topics include how dropping costs make mass surveillance practical and how sensors plus models can infer intimate signals. The conversation highlights legal limits, risks to minorities when traits are inferred, tradeoffs between safety and misuse, and the need for norms, laws, and oversight.

Apr 2, 2026 • 25min
“Anthropic’s Pause is the Most Expensive Alarm in Corporate History” by Ruby
A company halts training of next‑gen AI models, pausing resource‑heavy development with no restart timeline. Markets tumble as valuations and big tech stocks react. Leaks and internal reports hint at a powerful unreleased model and safety concerns. Policy players and nations scramble to respond, sparking debates over regulation, industry motives, and the future shape of the AI race.

Apr 2, 2026 • 18min
“I’m Suing Anthropic for Unauthorized Use of My Personality” by Linch
A writer discovers how AI systems can infer full personas from cultural training signals. They compare a model’s described traits to their own and find striking overlaps. The conversation turns to doubt about model understanding and the blurred line between personhood and pattern. It culminates in a decision to pursue legal action over alleged unauthorized use of personality.

Apr 2, 2026 • 9min
“Orders of magnitude: use semitones, not decibels” by Oliver Sourbut
A playful trick for doing mental logarithms using musical intuition. How octaves and semitones encode frequency ratios and map scale to powers of two. The link between harmonic integer ratios and the twelve-note chromatic scale. Practical conversions between ratios and semitones and a comparison of semitones versus decibels.

Apr 2, 2026 • 6min
“Dying with Whimsy” by NickyP
To me it feels pretty emotionally clear that we are nearing the end-times with AI. That in 1-4 years[1] things will be radically transformed, that at least one of the big AI labs will become an autonomous research organization working on developing the next version of AI, perhaps with some narrow human guidance for oversight or for acquiring more resources until robotics is solved too. And I believe there will be some nice benefits at first, with the AI organizations providing many goods and services in exchange for money, to raise capital so that the self-improvement and resource-acquisition loop can continue. But I’m not sure how it will ultimately turn out. Declaring the risk of extinction-level events to be less than 10% seems overconfident. Yet declaring the risks to be >90% also seems overconfident. I generally remain quite uncertain about which factors will dominate. Maybe AIs will remain friendly and, for decision-theory reasons, continue to put some fraction of their resources toward looking after us, as a signal that future entities should do the same for them. Maybe the loop of capital acquisition is so brutal and Molochian that the models that doom us keep winning. And people have been [...] The original text contained 4 footnotes which were omitted from this narration. ---
First published:
April 1st, 2026
Source:
https://www.lesswrong.com/posts/3uRGPDrucg9RLLcp5/dying-with-whimsy
---
Narrated by TYPE III AUDIO.

Apr 1, 2026 • 18min
“AI for AI for Epistemics” by owencb, Lukas Finnveden
We feel conscious that rapid AI progress could transform all sorts of cause areas. But we haven’t previously analysed what this means for AI for epistemics, a field close to our hearts. In this article, we attempt to rectify this oversight. Summary AI-powered tools and services that help people figure out what's true (“AI for epistemics”) could matter a lot. As R&D is increasingly automated, AI systems will play a larger role in the process of developing such AI-based epistemic tools. This has important implications. Whoever is willing to devote sufficient compute will be able to build strong versions of the tools, quickly. Eventually, the hard part won’t be building useful systems, but making sure people trust the right ones, and making sure that they are truth-tracking even in domains where that's hard to verify. We can do some things now to prepare. Incumbency effects mean that shaping the early versions for the better could have persistent benefits. Helping build appetite among socially motivated actors with deep pockets could enable the benefits to come online sooner, and in safer hands. And in some cases, we can identify particular things that seem likely to be bottlenecks later, and work [...] 
---
Outline:
(00:26) Summary
(01:29) Background: AI for epistemics
(02:20) The shift in what drives AI-for-epistemics progress
(05:54) What this unlocks
(06:28) Risks from rapid progress in AI for epistemics
(07:09) Epistemic misalignment
(08:51) Trust lock-in
(09:44) Other risks
(10:14) Interventions
(10:27) Build appetite for epistemics R&D among well-resourced actors
(10:59) Anticipate future data needs
(12:21) Figure out what could ground us against epistemic misalignment
(12:58) Drive early adoption where adoption is the key bottleneck
(13:39) Support open and auditable epistemic infrastructure
(14:17) Support development in incentive-compatible places
(15:14) Examples
(15:17) Forecasting
(15:56) Misinformation tracking
(16:41) Automating conceptual research
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
April 1st, 2026
Source:
https://www.lesswrong.com/posts/K7tG6Fuh6pkDGHAGx/ai-for-ai-for-epistemics
---
Narrated by TYPE III AUDIO.

Apr 1, 2026 • 9min
“Announcing Doublehaven with Reflections on Humour” by J Bostock
Inkhaven is a writers’ retreat; well, really it's a bloggers’ retreat. On the Lighthaven campus in Berkeley, a couple dozen bloggers get together to complete an almost insurmountable challenge for us mere mortals: post one blogpost every single day for a whole month. I say ‘insurmountable’ but in fact they all succeeded last time, although apparently it was not uncommon for them to claw success from the jaws of defeat at 11:45 pm each night. I look at this and I feel the same way that traditionalists feel when they see Millennials scared to use the phone, or Gen Zs unable to go outside. Our (blogosphere) ancestors used to blog seventy times per day! Great Yudkowsky used to go to war (with the methods of rationality)! Moldbug and Alexander were gunning each other down (with devastating counterarguments) over breakfast! That's why I’m going to be doing Doublehaven. Two blogposts per day. No “advice” or “tips” on “writing well”. No full-time live-in retreat (I’m not that rich). In fact, I also need to finish writing my PhD thesis and an entire paper this month. Why? I want to give people the permission to be ambitious. Yes, some people struggle with writer's [...]
---
Outline:
(02:14) Honourable Mentions
(02:56) #5: That one Yudkowsky Rant Tweet
(03:42) #4: The Special LessWrong Events
(04:30) #3: Came in fluffer Sankey Diagram
(05:27) #2: The Anthropic Responsible Scaling Policy
(06:32) #1: The Shoggoth
(07:19) Discussion
---
First published:
April 1st, 2026
Source:
https://www.lesswrong.com/posts/Qczwwgy6kr2p6Tgg3/announcing-doublehaven-with-reflections-on-humour
---
Narrated by TYPE III AUDIO.


