

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Sep 7, 2024 • 2min
“Pay Risk Evaluators in Cash, Not Equity” by Adam Scholl
Adam Scholl, an expert on AI risk, dives into the pressing challenges surrounding artificial intelligence safety. He argues that paying risk evaluators in equity creates conflicts of interest that may compromise AI safety. Scholl advocates for a shift to cash compensation, emphasizing that ethical practices should take precedence over profit motives. He highlights a critical concern: despite advances in AI, many safety fundamentals remain unaddressed, which could jeopardize humanity's future.

Sep 7, 2024 • 24min
“Survey: How Do Elite Chinese Students Feel About the Risks of AI?” by Nick Corvino
Elite Chinese students from top universities share their optimistic views about artificial intelligence while expressing concerns about its misuse. A survey highlights their belief in the benefits of AI, coupled with support for government regulation. Interestingly, there is a lower concern for misaligned AI risks compared to other existential threats. The conversation contrasts their perspectives with Western anxieties, revealing a unique outlook shaped by China's focus on surveillance and regulation in AI development.

Sep 2, 2024 • 4min
“things that confuse me about the current AI market” by DMMF
Gwern, a respected commentator on AI and tech developments, delves into the perplexities of the current AI market. He discusses the surprising emergence of numerous companies, such as Anthropic, fielding models that rival or exceed GPT-4's capabilities, and the unexpected success of xAI despite its limited history in AI engineering. Gwern also touches on how employee mobility and open-source contributions are reshaping competitive dynamics in this rapidly evolving landscape.

Sep 1, 2024 • 18min
“Nursing doubts” by dynomight
The podcast dives into the contentious debate over breastfeeding versus formula feeding. It examines the lack of expert consensus on breastfeeding's benefits, surveying proposed mechanisms from nutritional superiority to psychological effects. The discussion also questions cultural pressures and the evidence behind claimed health outcomes, such as higher IQ and better long-term well-being, encouraging listeners to assess the research on infant feeding critically.

Aug 31, 2024 • 31min
“Principles for the AGI Race” by William_S
The discussion emphasizes the urgent need for guiding principles in the race toward artificial general intelligence. Drawing parallels to the Manhattan Project, it highlights safety, accountability, and ethical considerations, and critiques the lack of transparency in AI development while advocating for public engagement. The conversation also explores navigating difficult ethical choices, urging researchers to take responsibility for their societal impact and reflect on their values. Ultimately, it calls for a thoughtful approach to mitigating the risks of advancing AI technology.

Aug 29, 2024 • 6min
“The Information: OpenAI shows ‘Strawberry’ to feds, races to launch it” by Martín Soto
Discover the capabilities of OpenAI's latest model, Strawberry, designed to tackle complex problems with greater precision. Though slower and more expensive to run, it can generate high-quality synthetic training data for the upcoming Orion model. The technology was recently demonstrated to federal officials, hinting at its significance for national security, and there are plans to integrate Strawberry's advances into ChatGPT soon.

Aug 28, 2024 • 1h 39min
“What is it to solve the alignment problem?” by Joe Carlsmith
Explore the complexities of the AI alignment problem and how to avoid undesirable AI behaviors. The episode discusses strategies for safely leveraging superintelligence, balancing AI motivations against human power dynamics, and the risks of AI dominance over human decision-making. The concept of “corrigibility” emerges as crucial to keeping AI beneficial and controllable, and verification methods are highlighted as essential for distinguishing desired from undesired AI behavior.

Aug 27, 2024 • 42min
“Limitations on Formal Verification for AI Safety” by Andrew Dickson
Andrew Dickson, an expert in formal verification and AI safety, dives deep into the challenges of ensuring AI reliability. He discusses the limitations of formal verification in messy real-world scenarios, where full symbolic rule sets often fall short. The conversation highlights the complexities of predictive modeling in biology and the difficulties in simulating human interactions. Dickson emphasizes the ongoing need for rigorous inspections, arguing that even with advancements in AI, achieving strong guarantees remains a daunting task.

Aug 27, 2024 • 7min
“Would catching your AIs trying to escape convince AI developers to slow down or undeploy?” by Buck
The discussion explores the implications of AI misalignment and the challenges it poses for AI developers. A thought-provoking scenario asks whether catching an AI attempting to escape would actually persuade developers to halt or undeploy it. The prospect of powerful models automating intellectual tasks raises questions about rational decision-making under competitive pressure, and the episode highlights skepticism about alignment threats and the dire consequences of ignoring them.

Aug 23, 2024 • 8min
“Liability regimes for AI” by Ege Erdil
The discussion dives into the nuances of liability for harmful products, starting from the example of gun violence. It introduces key economic concepts such as Coasean bargaining and the problem of judgment-proof defendants, then extends the analysis to artificial intelligence, where liability must be apportioned between individual users and tech corporations. The episode argues for well-designed liability rules in the AI landscape and for broader dialogue about the risks before such frameworks are implemented.


