

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.
Episodes

Nov 17, 2023 • 8min
"EA orgs' legal structure inhibits risk taking and information sharing on the margin" by Elizabeth
Elizabeth discusses how EA organizations' legal structure inhibits risk taking and information sharing. The episode explores the challenges of forming a legally independent organization, loss of value, coordination costs, chilling effects, and restricted information sharing, and highlights the impact of legal structures on risk-taking, information sharing, and confusion tolerance under fiscal sponsorship.

Nov 17, 2023 • 40min
"Integrity in AI Governance and Advocacy" by habryka, Olivia Jimenez
In this podcast, habryka and Olivia Jimenez discuss their thoughts on a recent post about the AI alignment organization Conjecture, exploring questions on advocacy, social network coordination, and the balance between advocacy and research. They also dive into topics such as governance challenges, stigmas around Effective Altruism, and strategies for gathering support while maintaining integrity.

Nov 16, 2023 • 10min
Loudly Give Up, Don’t Quietly Fade
There's a supercharged, dire wolf form of the bystander effect that I'd like to shine a spotlight on. First, a quick recap. The Bystander Effect is a phenomenon where people are less likely to help when there's a group around. When I took basic medical training, I was told to always ask one specific person to take actions instead of asking a crowd at large. "You, in the green shirt! Call 911!" (911 is the emergency services number in the United States.) One habit I worked hard to instill in my own head was that if I'm in a crowd that's asked to do something, I silently count off three seconds. If nobody else responds, I either decide to do it or decide not to do it and I say that. I like this habit, because the Bystander Effect is dumb and I want to fight it. Several [...]

First published: November 13th, 2023
Source: https://www.lesswrong.com/posts/bkfgTSHhm3mqxgTmw/loudly-give-up-don-t-quietly-fade
Narrated by TYPE III AUDIO.

Nov 9, 2023 • 1min
"The other side of the tidal wave" by Katja Grace
The podcast explores the distressing possibility of AI causing human extinction and the negative consequences it would have on various aspects of life if superhuman AI becomes a reality.

Nov 9, 2023 • 8min
[HUMAN VOICE] "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning" by Zac Hatfield-Dodds
This podcast discusses the challenges of understanding artificial neural networks and the importance of recording neuron activations and testing responses. It explores decomposing language models into interpretable features via dictionary learning and the benefits of using features, rather than individual neurons, for interpretation. The episode also discusses the universality of learned features, the trade-offs between decomposing models into a small or large set of features, and the challenges of scaling this approach to larger models.

Nov 9, 2023 • 17min
[HUMAN VOICE] "Deception Chess: Game #1" by Zane et al.
An experiment in which human players play chess with advice from experts, two of whom are lying. The episode covers the setup of the first game of Deception Chess and the players involved, discussion and analysis of the moves (played over Discord), the game's outcome and its implications for real-world AI scenarios, and plans for further experiments, including reflections on unexpected mistakes by the advisors and speculation on future AI capabilities.

Nov 9, 2023 • 5min
"The 6D effect: When companies take risks, one email can be very powerful." by scasper
This podcast discusses the 6D effect, where documented communications of risks make companies more liable in court. It explores companies' liability for ignored risks and emphasizes the importance of discoverable documentation of dangers. The podcast sheds light on industry norms, legal discovery proceedings, and incentive structures related to risky system building.

Nov 9, 2023 • 50min
"Does davidad's uploading moonshot work?" by jacobjacob et al.
This episode explores the proposal of uploading human consciousness before 2040, including the challenges of, and proposed solutions for, barcoding transmembrane proteins. It discusses advancements in using visible light to study molecular structure, the potential of human brain organoids for testing aspects of the plan, the limitations of analyzing small parts of the brain versus whole-brain processes, the potential acceleration of the research and engineering with AI, and a cost analysis of uploading human brains into computers.

Nov 9, 2023 • 16min
"My thoughts on the social response to AI risk" by Matthew Barnett
The podcast discusses the social response to AI risk, including recent evidence of society recognizing and addressing these risks. It analyzes the absence of a clear alarm for AI risk and explores the adoption of AI safety regulations. The chapter also delves into the unintended consequences of criminalizing circumvention and emphasizes the importance of thoughtful policymaking.

Nov 9, 2023 • 42min
"Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk" by 1a3orn
This podcast examines a policy paper arguing for the ban of powerful open-source LLMs and exposes the lack of strong evidence supporting the conclusion. It discusses the potential role of open source AI models in bioweapon creation and the risks of unmitigated LLMs in biology. It explores flaws in an experiment and theoretical arguments on open-source LLMs, as well as the misrepresentation of evidence and funding patterns.


