

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Jul 8, 2024 • 27min
“When is a mind me?” by Rob Bensinger
Guest Rob Bensinger discusses mind uploading, exploring the Ship of Theseus and personal identity. The episode tackles questions such as whether to care about an upload as much as your biological self, whether you would experience what your upload experiences, and whether to accept brain destruction as part of the scanning process. It challenges traditional views of identity and consciousness through thought experiments and futuristic scenarios.

Jul 4, 2024 • 13min
“80,000 hours should remove OpenAI from the Job Board (and similar orgs should do similarly)” by Raemon
Host Raemon argues for removing OpenAI from the 80,000 Hours job board, citing the organization's manipulativeness and lack of emphasis on existential safety work. The episode reviews OpenAI's history of such behavior, the need for accountability, and suggestions for rebuilding trust.

Jul 2, 2024 • 12min
[Linkpost] “introduction to cancer vaccines” by bhauth
A discussion on cancer vaccines, focusing on neoantigens and personalized vaccine development. Topics include methods for protein characterization, innovative techniques in cancer vaccine development, and the use of synthetic long peptides for cancer vaccines.

Jul 2, 2024 • 13min
“Priors and Prejudice” by MathiasKB
Explores a hypothetical movement called the Effective Samaritans, influenced by socialist communities, and the debate over the effectiveness of charity versus societal transformation. The episode reflects on how early beliefs and influences shape one's priors, and on navigating conflicting priorities when trying to do good through experimental methods.

Jul 2, 2024 • 30min
“My experience using financial commitments to overcome akrasia” by William Howard
William Howard shares his experience using the Forfeit app to combat akrasia. He discusses the effectiveness of setting financial forfeits for tasks like coding and publishing, tips for task management, and strategies for overcoming procrastination through monetary commitments.

Jul 1, 2024 • 14min
“The Incredible Fentanyl-Detecting Machine” by sarahconstantin
This episode discusses the significance of fentanyl-detecting machines and their role in countering the fentanyl crisis, exploring technologies like remote detection and X-ray scanners at the US southwest border. It delves into the challenges of detecting chemical compounds and the impact of these machines on border security.

Jul 1, 2024 • 15min
“AI catastrophes and rogue deployments” by Buck
The episode examines AI catastrophes classified by whether they involve a rogue deployment, along with the challenges of prevention, subcategories of rogue deployments, and different attacker profiles.

Jul 1, 2024 • 1h 4min
“Loving a world you don’t trust” by Joe Carlsmith
Joe Carlsmith, author of the series “Otherness and control in the age of AGI,” discusses the duality of activity versus receptivity, facing darkness in the world, and themes of humanism and defiance in “Angels in America.” The episode touches on deep atheism, embracing responsibility, and trusting in reality despite its potential lack of inherent goodness.

Jun 27, 2024 • 17min
“Formal verification, heuristic explanations and surprise accounting” by paulfchristiano
The podcast discusses formal verification and heuristic explanations in neural networks, aiming to improve interpretability and ensure safe behavior. It explores the challenges of proving guarantees for network behavior and introduces surprise accounting as a method to evaluate heuristic explanations.

Jun 25, 2024 • 13min
“LLM Generality is a Timeline Crux” by eggsyntax
The episode dives into the limitations of large language models in general reasoning and asks whether scaling or tooling can overcome them. It also discusses the potential of LLMs to achieve multi-step reasoning, future AI advancements, safety implications, and the development of artificial general intelligence.


