

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Aug 7, 2024 • 9min
“You don’t know how bad most things are nor precisely how they’re bad.” by Solenoid_Entity
Dive into the intriguing world of discernment, where time and attention significantly enhance our understanding of quality. Explore the nuances of piano tuning, revealing how even experts struggle to detect subtle flaws. Discover the complexities of awareness, and how often we overlook our own blind spots. This discussion highlights the perils of relying on automation in tasks requiring skilled judgment, emphasizing the intricate details in reality that often go unnoticed.

Aug 7, 2024 • 22min
“Recommendation: reports on the search for missing hiker Bill Ewasko” by eukaryote
Tom Mahood, an insightful blogger on missing persons, teams up with Adam Marsland, a dedicated videographer, to discuss the enigmatic 2010 disappearance of hiker Bill Ewasko in Joshua Tree National Park. They explore the complexities of wilderness searches, the essential strategies employed when looking for missing individuals, and the emotional toll faced by searchers. The conversation reveals the challenges in navigating both the terrain and the psychological aspects of such tragic cases, shedding light on the critical lessons learned from this heartbreaking incident.

Aug 7, 2024 • 30min
“The ‘strong’ feature hypothesis could be wrong” by lsgos
lsgos, a member of the Google DeepMind language model interpretability team, dives deep into the complexities of AI interpretability. They challenge the strong feature hypothesis, arguing that neurons may not correspond to specific visual features as previously thought. The discussion explores the distinction between explicit and tacit representations, using chess as a metaphor for decision-making, and calls for a reevaluation of how we interpret neural networks, advocating for methods that account for context-dependent features.

Jul 30, 2024 • 4min
“‘AI achieves silver-medal standard solving International Mathematical Olympiad problems’” by gjm
Explore groundbreaking advancements in AI with Google DeepMind's latest systems, AlphaProof and AlphaGeometry. These innovations tackle complex mathematical problems, nearing silver-medal standards for the International Mathematical Olympiad. Discover how AlphaProof uses LLMs and proof-checking to refine solutions, while AlphaGeometry excels in geometry tasks. The training process includes real-time reinforcement during contests, making for a fascinating insight into the future of problem-solving AI!

Jul 29, 2024 • 24min
“Decomposing Agency — capabilities without desires” by owencb, Raymond D
In this insightful discussion, Raymond D, a thinker on agency and technology, dives into the slippery concept of what constitutes an agent. He explores how breaking agency into components like goals and planning can reshape our views, especially regarding advanced AI systems. The conversation touches on the potential future of technology, emphasizing the role of collective decisions and scenario mapping in guiding the development of AGI. Raymond's perspectives challenge us to rethink our assumptions about intelligence, both human and artificial.

Jul 27, 2024 • 16min
“Universal Basic Income and Poverty” by Eliezer Yudkowsky
Eliezer Yudkowsky, a renowned thinker in artificial intelligence and rationality, discusses the complexities of Universal Basic Income (UBI) and its limitations in combating poverty. He highlights how advancements in material wealth don't necessarily eliminate deprivation, using historical comparisons to illustrate this paradox. Yudkowsky also ventures into hypothetical societies, like Anoxistan, to show how poverty transcends mere material possession. Ultimately, he calls for more profound economic research to understand the true nature of poverty.

Jul 19, 2024 • 13min
“Optimistic Assumptions, Longterm Planning, and ‘Cope’” by Raemon
Raemon discusses the pitfalls of making assumptions for AI planning, highlighting the dangers of 'cope-y' reasoning. Examples of questionable plans and the necessity of realistic long-term planning are explored. The podcast delves into the challenges of aligning assumptions with outcomes, emphasizing the importance of critical thinking in AI safety.

Jul 15, 2024 • 19min
“Superbabies: Putting The Pieces Together” by sarahconstantin
Sarah Constantin discusses the prospect of creating 'designer babies' with desired traits through genetic intervention. Topics include polygenic scores for trait prediction, the challenges of gene editing, embryo selection, iterated meiosis, naive pluripotent cells, and the many uncertainties that remain.

Jul 12, 2024 • 18min
“Poker is a bad game for teaching epistemics. Figgie is a better one.” by rossry
An exploration of poker's limits as a tool for teaching epistemics and decision-making, with comparisons to games like Candy Land and Chess. Figgie is proposed as the better teaching tool, offering richer feedback and more active participation than poker.

Jul 11, 2024 • 1h 22min
“Reliable Sources: The Story of David Gerard” by TracingWoodgrains
David Gerard, a long-standing critic of rationalist and EA communities, discusses his standards for reliable sources, his involvement with LessWrong and Effective Altruism, conflicts within online communities, and his contentious actions on Wikipedia, leading to bans from editing certain articles.


