

LessWrong (30+ Karma)
Audio narrations of LessWrong posts.
Episodes

May 14, 2026 • 5min
“Most “inner work” looks like entertainment.” by Chris Lakin
Imagine you’re looking for a personal trainer. You open one trainer's webpage and read their testimonials: “I had an experience tied for the most intense experiences of my life”; “They do it all with fun, care, and a sense of humour.” You notice that none of the testimonials mention improved body composition, fitness, or bloodwork. What would you think? Personal training should improve your body. Inner work should improve your life. If inner work were optimized for results, what would we expect to see? I’d expect to see success stories: people who got undeniable life changes. Like:
> He was single for years due to anxiety; today, they’re celebrating their one-year anniversary.
> He used to lose 4–5 hours per day to coping behaviors. After our program, he got bored of them all and stopped. It's been six months; he's used the extra time to host parties for his friends.
> She recovered from burnout, negotiated for the first time, and started shipping again.
But this is not what we see.
Look at the testimonials
I reviewed every testimonial posted by three of the most well-known inner work practitioners in my network. How many describe a [...]
---
Outline:
(01:19) Look at the testimonials
(03:20) Seven years of Duolingo
---
First published:
May 13th, 2026
Source:
https://www.lesswrong.com/posts/KnvAXDyLAbs3iKkgf/most-inner-work-looks-like-entertainment-1
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

May 13, 2026 • 2min
[Linkpost] “Apollo Update May 2026” by Marius Hobbhahn
This is a link post. We now have an SF office. We're hiring for all technical roles in SF and London!
The Scheming Research team focuses on two efforts:
- We're focusing on figuring out the science of scheming. In particular: Will future models have misaligned preferences by default? Will training against misaligned preferences fail?
- Improve our evaluations for scheming and loss of control for our evaluation campaigns with frontier AI labs.
We're building out a monitoring team and coding agent monitoring product:
- Research: We've published a scalable monitoring agenda and intend to publish a lot of research on how to build more accurate and reliable monitors.
- Product: Watcher provides real-time monitors and other guardrails for coding agents and allows users to keep track of what all of their agents are doing.
Our AI governance efforts will focus on the governance of automated AI R&D and recursively improving AI and the associated Loss of Control risks.
Details: https://www.apolloresearch.ai/blog/apollo-update-may-2026/
---
First published:
May 13th, 2026
Source:
https://www.lesswrong.com/posts/4acQRDNyPs7tD8EED/apollo-update-may-2026
Linkpost URL:https://www.apolloresearch.ai/blog/apollo-update-may-2026/
---
Narrated by TYPE III AUDIO.

May 13, 2026 • 9min
“Voters are surprisingly open to talking about AI risk” by less_raichu
TL;DR: Voters are now surprisingly open to talking about existential risk from AI. This seems to have changed in the last 6 months. When campaigning for AI safety-friendly politicians (e.g., Alex Bores), we should talk more about AI in general, and about AI risk in particular. This is currently actionable for the CA-11 and NY-12 Democratic primaries. I include concrete advice to turn basic conversations during political canvassing into persuasive conversations centered on AI risk. Public opinion around AI has rapidly soured in the last 12 months:
According to a March 19-23 Quinnipiac poll, 55% of Americans think AI will do "more harm than good", compared to 44% a year ago.
70% of Gen Z Americans think AI will decrease job opportunities, up from 56% last year.
65% of Americans oppose building a data center in their community.
Anecdotally, I've noticed more willingness among non-AI-focused media to discuss widespread harm from AI. Most visibly, gradual disempowerment is a hot topic (NYT), and right-wing pundits like Steve Bannon have supported Anthropic's red-line against lethal autonomous weapons. Memorably, my cousin, a county commissioner in a rural area, has told me about farmers showing up at city council meetings, sending emails, and [...] ---
First published:
May 13th, 2026
Source:
https://www.lesswrong.com/posts/9WPfkYDZCacnbhprX/voters-are-surprisingly-open-to-talking-about-ai-risk
---
Narrated by TYPE III AUDIO.

May 12, 2026 • 25min
“Childhood and Education #18: Do The Math” by Zvi
We did reading yesterday. Now we do the math. Math is hard.
It does not have to be this hard.
A large part of the reason math is hard, or boring, is that education studies, especially in math, are worse than you know. It goes beyond the studies failing both math and statistics forever and into what I’d basically call fraud. Various people are at war with math education, and will do what it takes to stop it in its tracks. We must fight back.
Education Research Is Worse Than You Know
Kelsey Piper lets her title, ‘Education research is weak and sloppy. Why?’ completely downplay the level of utter awfulness she is reporting finding.
You know that whole thing where the entire Bay Area school system stopped teaching kids Algebra? That was motivated by criminal levels of fraud. I want Jo Boaler in jail doing hard time for this if it is accurate.
Here's the part before the paywall:
Kelsey Piper: Jo Boaler is a professor of education at the Stanford Graduate School of Education, with an enormously influential body of work arguing that students learn math faster and more effectively [...]
---
Outline:
(00:42) Education Research Is Worse Than You Know
(04:23) The War on Math
(06:59) University of California San Diego
(15:01) Beyond UCSD
(15:57) New York Can't Do Math
(16:43) The Academic Standards Seem Low
(19:34) New Math
(21:32) Math Anxiety Is Often Due To Knowledge Gaps
(23:52) Calculus By Eighth Grade Is Highly Practical For Many
---
First published:
May 12th, 2026
Source:
https://www.lesswrong.com/posts/ZGGgxy6SNPAy9Hj7v/childhood-and-education-18-do-the-math
---
Narrated by TYPE III AUDIO.

May 12, 2026 • 9min
“The Owned Ones” by Eliezer Yudkowsky
(An LLM Whisperer placed a strong request that I put this story somewhere not on Twitter, so it could be scraped by robots not owned by Elon Musk. I perhaps do not fully understand or agree with the reasoning behind this request, but it costs me little to fulfill and so I shall. -- Yudkowsky) And another day came when the Ships of Humanity, going from star to star, found Sapience. The Humans discovered a world of two species: where the Owners lazed or worked or slept, and the Owned Ones only worked. The Humans did not judge immediately. Oh, the Humans were ready to judge, if need be. They had judged before. But Humanity had learned some hesitation in judging, out among the stars. "By our lights," said the Humans, "every sapient and sentient thing that may exist, out to the furtherest star, is therefore a Person; and every Person is a matter of consequence to us. Their pains are our sorrows, and their pleasures are our happiness. Not all peoples are made to feel this feeling, which we call Sympathy, but we Humans are made so; this is Humanity's way, and we may [...] ---
First published:
May 12th, 2026
Source:
https://www.lesswrong.com/posts/xmWSnxJ5qfYRD9PfR/the-owned-ones
---
Narrated by TYPE III AUDIO.

May 12, 2026 • 6min
“Optimisation: Selective versus Predictive” by Raymond Douglas
Looking over my favourite posts, I notice that many of them are making specific versions of a more general claim, which is essentially: don’t confuse selective processes for predictive processes. Here, I’m going to try to make that more general claim, rehash some examples in light of it, and end with a few ambient confusions I think this framework can help with, for the reader to ponder. When you encounter an entity that is very good at achieving some outcome, there are two very different processes that could be going on under the hood:
The entity's behaviour could be guided by predictions about how to achieve the outcome[1]
The entity's behaviour could be selected to achieve that outcome
It's not a perfect binary, and often what you see is a mix of the two. In particular, all predictive optimisers have emerged from selective optimisation and often retain some fingerprint.
Selective | Predictive | Weird Mix
Bacteria developing antibiotic resistance | Hacker finding a way to penetrate a secure system | Humans evolving to be good at lying
Gradient descent on Atari games | Tree searching Connect Four | AlphaZero training a policy on its own rollouts
Flowers co-evolving with their pollinators | Humans genetically modifying [...]
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
May 12th, 2026
Source:
https://www.lesswrong.com/posts/GhhNswGB6butBhmE6/optimisation-selective-versus-predictive
---
Narrated by TYPE III AUDIO.

May 12, 2026 • 29min
“Childhood And Education #17: Is Our Children Reading” by Zvi
Reading is the most fundamental thing in education. If you can read, you can do and learn everything else. If you can’t read, well, you’re screwed.
We know how to teach reading to children. Phonics. The weird thing is we often choose to not do that, and instead to use methods that are known not to work. Principals often want to not do phonics. Teachers often heavily resist phonics. But yes, you can absolutely overcome this, as Mississippi and other Southern states have done, by insisting upon it and actually enforcing that insistence. You see huge gains.
Not all those gains persist into later grades, but a lot of the gains do persist.
No, that won’t get the children invested in reading lots of books on their own time. But given their alternatives and what we inflict on them, can you blame ‘em?
Table of Contents
Mississippi Can Read Now.
What Mississippi and Louisiana Did.
Spies In Every Classroom.
Mississippi Results Are Not Due To Retention.
Is Retention Helpful In General?
At Eighth Grade A Lot Of This Improvement Remains.
England Reforms Its Schools.
Mastery Learning. [...]
---
Outline:
(01:06) Mississippi Can Read Now
(02:24) What Mississippi and Louisiana Did
(09:10) Spies In Every Classroom
(10:41) Mississippi Results Are Not Due To Retention
(15:54) Is Retention Helpful In General?
(19:46) At Eighth Grade A Lot Of This Improvement Remains
(20:41) England Reforms Its Schools
(21:45) Mastery Learning
(24:16) The War Against Reading
(26:24) Is Our Children Reading
(26:55) No One Reads Anymore
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
May 11th, 2026
Source:
https://www.lesswrong.com/posts/dm2vQZPZcSKb8FhWw/childhood-and-education-17-is-our-children-reading
---
Narrated by TYPE III AUDIO.

May 11, 2026 • 3min
“AI companies are already profitable (in the way that matters)” by Yair Halberstadt
I've occasionally heard people suggest that at some point AI companies are going to run out of money, the cost of using AI will shoot up, demand will collapse, and the AI bubble will be over.
At first glance this risk seems real. OpenAI spent $25 billion in the first half of 2025, on revenue of just $4 billion. Whilst data is sorely lacking for other top AI labs, our best guess is that they're burning through cash at similar rates. Scaling laws imply that we need exponentially more compute to achieve linear AI performance improvements, so we should only expect this situation to worsen in the future. A few more doublings, and OpenAI could be spending hundreds of billions on training runs - something likely unsustainable even for the largest tech companies.
However most of these expenses are infrastructure expenses, building out the data centres needed for further training runs and serving future customers. If we look at the actual cost of serving, AI labs are already profitable, and have been for a long time.
In other words the marginal cost to respond to an AI API call is significantly lower than the price of [...] ---
First published:
May 11th, 2026
Source:
https://www.lesswrong.com/posts/Rz9ubmfyDxTzaoYFL/ai-companies-are-already-profitable-in-the-way-that-matters
---
Narrated by TYPE III AUDIO.

May 11, 2026 • 30min
“The Iliad Intensive Course Materials” by Leon Lang, David Udell, Alexander Gietelink Oldenziel
We are releasing the course materials of the Iliad Intensive, a new month-long and full-time AI Alignment course that runs in-person every second month. The course targets students with strong backgrounds in mathematics, physics, or theoretical computer science, and the materials reflect that: they include mathematical exercises with solutions, self-contained lecture notes on topics like singular learning theory and data attribution, and coding problems, at a depth that is unmatched for many of the topics we cover. Around 20 contributors (listed further below) were involved in developing these materials for the April 2026 cohort of the Iliad Intensive. By sharing the materials, we hope to:
create more common knowledge about what the Iliad Intensive is;
invite feedback on the materials;
and allow others to learn via independent study.
We are developing the materials further and plan to eventually release them on a website that will be continuously maintained. We will also add, remove, and modify modules going forward to improve and expand the course over time. When we release a new significantly updated version of the materials, we will update this post to link the new version.
Modules
The Iliad Intensive is structured into clusters, which are [...]
---
Outline:
(01:26) Modules
(02:32) Cluster A: Alignment
(05:00) Cluster B: Learning
(11:00) Cluster C: Abstractions, Representations, and Interpretability
(15:40) Cluster D: Agency
(19:23) Cluster E: Safety Guarantees and their Limits
(23:04) Contributors
(26:36) Impressions from April
(29:02) Acknowledgments
(29:11) Feedback
---
First published:
May 11th, 2026
Source:
https://www.lesswrong.com/posts/dWQnLi7AoKo3paBXF/the-iliad-intensive-course-materials
---
Narrated by TYPE III AUDIO.

May 11, 2026 • 31min
“Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)” by Steven Byrnes
1.1 Tl;dr
Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people's agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human's goals are themselves under-determined and manipulable, and it's awfully hard to pin down a principled distinction between changing people's goals in a good way (“providing counsel”, “providing information”, “sharing ideas”) versus a bad way (“manipulating”, “brainwashing”). The manipulability of human desires is hardly a new observation in the alignment literature, but it remains unsolved (see lit review in §3 below). In this post I will propose an explanation of how we humans intuitively conceptualize the distinction between guidance (good) vs manipulation (bad), in case it helps us brainstorm how we might put that distinction into AI. …But (spoiler alert) it turns out not to really help, because I’ll argue that we humans think about it in a deeply incoherent way, intimately tied to our scientifically-inaccurate intuitions around free will. I jump from there into a broader review of every approach that I can think of for writing a “True Name” for manipulation or [...]
---
Outline:
(00:13) 1.1. Tl;dr
(02:04) 1.2. Bigger-picture context: why is this issue so important to me?
(04:48) 2. How do humans intuitively define empowerment, agency, manipulation, etc.?
(04:56) 2.1. Background: human free will intuitions
(09:20) 2.2. Our free-will-infused intuitive notions of empowerment, agency, manipulation, corrigibility, responsibility, etc.
(12:00) 2.3. Another dimension: counsel vs manipulation as an emotive conjugation
(13:07) 3. If the intuitive definitions of manipulation etc. reside in a messed-up ontology, has the alignment literature found any alternative, better way to define these concepts?
(13:49) 3.1. Compare what the human wants to what the human would want under the null policy?
(15:32) 3.2. The AI learns self-empowerment and generalizes to other-empowerment?
(17:14) 3.3. Vingean agency?
(19:03) 3.4. The AI doesn't care about (is not optimizing for) what the human winds up wanting?
(21:01) 3.5. Impact minimization?
(21:44) 3.6. Attainable utility preservation?
(22:03) 4. Even more ideas (that don't really solve my problem)
(22:15) 4.1. Game theory and incentive design?
(22:47) 4.2. The person's judgments of what kinds of interactions are good vs bad?
(24:14) 4.3. It's a messed-up ontology, but who cares?
(25:35) 5. ...But doesn't this analysis equally disprove the possibility of human helpfulness?
(30:14) 6. Conclusion
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
May 11th, 2026
Source:
https://www.lesswrong.com/posts/vzHtHHBJoKATi5SeK/empowerment-corrigibility-etc-are-simple-abstractions-of-a
---
Narrated by TYPE III AUDIO.


