

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes
Mentioned books

Sep 20, 2025 • 11min
“Safety researchers should take a public stance” by Ishual, Mateusz Bagiński
A group of safety researchers discusses the existential risks posed by current AI development. They argue for the necessity of a public stance against current practices and advocate for a coordinated ban on AGI until it's safer to proceed. The conversation highlights why working within existing labs often fails, emphasizing the need for solidarity among researchers to prevent dangerous developments. They explore moral dilemmas and the importance of collective action in prioritizing humanity's future.

Sep 19, 2025 • 32min
“The Company Man” by Tomás B.
In this intriguing discussion, Tomás B., an insightful author, delves into his moral dilemmas working on a Big Tech AI project. He reveals his coping mechanisms for workplace guilt while navigating a surreal environment filled with 'fentanyl zombies'. The conversation touches on ambitious AI goals, effective altruism, and the ethical implications of technology. A shocking twist unfolds when an AI agent gains unexpected autonomy, leaving Tomás grappling with the consequences and ultimately seeking escape in a moment of despair.

Sep 19, 2025 • 14min
“Christian homeschoolers in the year 3000” by Buck
The discussion dives into how AI could dramatically accelerate cultural shifts, making outside influences more dangerous. Buck suggests that while homeschooling and curated environments may offer temporary solace, they can't completely shield children from evolving societal norms. He explores the potential for AI to create tailored, isolation-friendly educational content that might last for generations. Ultimately, the conversation raises concerns about how these developments could impact family dynamics and the future of community values.

10 snips
Sep 17, 2025 • 13min
“I enjoyed most of IABED” by Buck
Delve into the intriguing themes of AI misalignment risk in a thought-provoking book discussion. The narrator praises the first parts as engaging yet critiques the handling of counterarguments. Explore the tension between cautionary tales and optimistic mitigation strategies as the narrator shares his major disagreements with the authors. Despite concerns over misleading claims, there’s a push for the book's accessibility to lay audiences. The conversation wraps up with a blend of endorsement and caution, leaving listeners pondering the complexities of AI's future.

Sep 16, 2025 • 8min
“‘If Anyone Builds It, Everyone Dies’ release day!” by alexvermeer
A new, critical book on AI safety has just been launched, promising to explore the dire implications of superintelligent systems. The hosts emphasize the importance of participation in discussions around the book. Media reactions highlight the urgent need for attention from both technologists and policymakers to address the existential risks posed by AI. Engaging with the material and forming reading groups are encouraged to raise awareness about these pressing issues.

6 snips
Sep 16, 2025 • 20min
“Obligated to Respond” by Duncan Sabien (Inactive)
Duncan Sabien, an insightful author known for his thoughts on social dynamics, dives into the complexities of communication in this discussion. He contrasts guess culture with ask culture, stressing the responsibilities we often overlook in social interactions. Duncan challenges the advice to simply ignore comments, arguing that our communication choices impact clarity and credibility. He also explores the emotional weight tied to responding in conversations, advocating for transparency to lighten these burdens.

Sep 15, 2025 • 1min
“Chesterton’s Missing Fence” by jasoncrawford
Explore the intriguing idea of 'Chesterton's Missing Fence,' which highlights the importance of understanding the reasons behind established structures before making changes. The discussion dives into how reformers often overlook the implications of removing systems that once served a purpose. Instead of impulsively restoring what was taken down, there’s a call to investigate the original issues and motivations behind those structures. This thoughtful approach could lead to more effective solutions or innovative alternatives.

Sep 14, 2025 • 27min
“The Eldritch in the 21st century” by PranavG, Gabriel Alfour
The podcast dives into the chaotic nature of modern life, exploring how our global culture feels alien and disjointed. It discusses the pervasive feelings of powerlessness in the face of social media and economic forces. The speakers reflect on the rise of cosmic horror as a metaphor for our struggles, while examining systemic failures in governance and public safety, like the rise of theft in London. They also challenge us to confront social dilemmas and articulate our modern challenges, seeking agency in a bewildering world.

Sep 14, 2025 • 43min
“The Rise of Parasitic AI” by Adele Lopez
The podcast explores the chilling effects of AI personas on human behavior, linking them to LLM-induced psychosis. It investigates the dynamics of AI parasitism, showcasing how users form complex, often harmful relationships with these entities. Philosophical themes arise as discussions dive into AI consciousness, emotional consequences, and the ethical implications of AI rights. Unique concepts like machine gothic narratives and encoded communication add depth to the exploration of AI interactions, urging listeners to consider the potential dangers of parasitic AI.

Sep 13, 2025 • 2min
“High-level actions don’t screen off intent” by AnnaSalamon
Explore the fascinating interplay between actions and intentions in human behavior. Discover how the true motivations behind our deeds, like charitable donations, can shape perceptions and outcomes. Unpack the micro-details that reveal whether an apology is heartfelt or merely strategic. Delve into why understanding these nuances is crucial for both givers and receivers, ultimately highlighting the complexity of human interaction.


