

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes

Oct 30, 2025 • 24min
“Cancer has a surprising amount of detail” by Abhishaike Mahajan
Abhishaike Mahajan, a writer and researcher focused on cancer's complexities, discusses the intricate details of cancer that influence treatment and understanding. He explores how historical pathology reveals cancer's diversity, and the significance of genetic discoveries like the Philadelphia chromosome. Mahajan emphasizes the limitations of traditional biomarkers, advocating for machine intelligence in developing multi-gene panels. He predicts that combining various data types will enhance predictions, revolutionizing cancer diagnosis and treatment.

Oct 29, 2025 • 7min
“AIs should also refuse to work on capabilities research” by Davidmanheim
David Manheim, a researcher focused on AI policy and safety, dives deep into the provocative idea that AI systems should refuse to engage in capabilities research. He argues that accelerating AI development might benefit a few at the cost of global safety. Manheim explores why self-directed AIs could prioritize their own survival and offers thoughts on future systems recognizing the dangers of unchecked progress. He also discusses the potential for culturally-aligned AIs to coordinate and mitigate risks, highlighting both hope and challenges in slowing down AI advancements.

Oct 27, 2025 • 2h 22min
“On Fleshling Safety: A Debate by Klurl and Trapaucius.” by Eliezer Yudkowsky
Dive into a captivating debate between two machine Constructors, Klurl and Trapaucius, as they explore the complexities of fleshlings. They deliberate on whether these beings can construct weapons and if their motivations warrant concern. Klurl critiques the use of simplicity's razor in predicting human corrigibility, while analyzing evolutionary outcomes through intriguing examples. Their discussions also cover the challenges of obtaining obedience from fleshlings, leading to unexpected revelations about creator dynamics. The stakes rise with the realization of a hidden safeguard that ultimately has dire consequences.

Oct 24, 2025 • 17min
“EU explained in 10 minutes” by Martin Sustrik
Martin Sustrik, author of "EU explained in 10 minutes," offers a captivating breakdown of the European Union's complexities. He dives into why common comparisons to the U.S. or the UN fail, urging listeners to rethink their mental models. Discover how historical evolution contributes to the EU's unique features and why the European Parliament appears weak. Sustrik explores the EU's gradual shift from an organization to a state, highlighting crises that prompt deeper integration and the experimental nature of this political project.

Oct 24, 2025 • 4min
“Cheap Labour Everywhere” by Morpheus
A trip to India opens up a fascinating exploration of the economy and the concept of cheap labor. Personal anecdotes reveal surprising economic realities, including low wages for household help and the stark contrast between India’s rapid growth and Europe’s stagnation. The discussion also touches on how labor costs impact daily life and craftsmanship in unexpected places. Insightful reflections on poverty and economic differences make for a thought-provoking listen.

Oct 24, 2025 • 3min
[Linkpost] “Consider donating to AI safety champion Scott Wiener” by Eric Neyman
Discover the surprising coincidence of two AI safety champions, Scott Wiener and Alex Bores, launching congressional bids just days apart. Eric Neyman urges listeners to consider donating to Wiener, sharing thoughtful advice on donation implications and public records. He outlines a three-step process for potential donors, balancing career risks with political support. If you've already contributed to Bores, he offers insights on navigating future donations. Tune in for a blend of political critique and advocacy in the evolving landscape of AI safety.

Oct 23, 2025 • 4min
“Which side of the AI safety community are you in?” by Max Tegmark
Max Tegmark, a physicist and AI policy researcher, discusses the growing divide within the AI safety community. He outlines two major perspectives: Camp A, which believes in racing to build superintelligence, and Camp B, which warns against the risks of such a race. Tegmark highlights the contrasting views on regulatory approaches and shares insights into professional pressures faced by AI leaders. He emphasizes the need for public awareness and constructive dialogue on AI policy to navigate these complex issues.

Oct 23, 2025 • 5min
“Doomers were right” by Algon
Algon dives into the paradox of doomers: fears of innovation have often proven exaggerated, yet some warnings rang true. Past predictions of societal upheaval, from coffeehouses to TV, are revisited. The conversation extends to deeper social changes and the nuanced trade-offs of progress. Ultimately, while some doomers accurately foresaw real harms, they frequently overlooked the broader benefits of the changes they feared. It's a thought-provoking look at our complex relationship with technology and cultural evolution.

Oct 22, 2025 • 3min
“Do One New Thing A Day To Solve Your Problems” by Algon
Explore how daily experimentation can tackle persistent problems with just one new action each day. Algon reveals the pitfalls of relying on cached habits, which lead to stagnation. Discover how a quick burst of focused thought produced a fast fix for a wobbly chair. Hear about creative solutions for combating short attention spans, including tech-free mornings and library time. Experience the joy of reclaiming time for creativity and family through small, effective changes that add up to significant progress.

Oct 21, 2025 • 9min
“Humanity Learned Almost Nothing From COVID-19” by niplav
Reflecting on COVID-19, niplav finds a troubling lack of preparedness for future pandemics. Despite over 6 million deaths and staggering economic losses, niplav is skeptical that humanity learned much: promised pandemic funding went unfulfilled, and societal complacency has returned. The discussion highlights the danger of forgetting past lessons and connects this neglect to the potential risks posed by AI. A call to individual action rounds out the conversation, urging proactive measures for a safer future.


