LessWrong (Curated & Popular)

Nov 1, 2024 • 4min

“The Compendium, A full argument about extinction risk from AGI” by adamShimi, Gabriel Alfour, Connor Leahy, Chris Scammell, Andrea_Miotti

This is a link post. We (Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti, Adam Shimi) have just published The Compendium, which brings together in a single place the most important arguments that drive our models of the AGI race, and what we need to do to avoid catastrophe.

We felt that something like this has been missing from the AI conversation. Most of these points have been shared before, but a "comprehensive worldview" doc has been missing. We've tried our best to fill this gap, and we welcome feedback and debate about the arguments. The Compendium is a living document, and we'll keep updating it as we learn more and change our minds.

We would appreciate your feedback, whether or not you agree with us: if you do agree with us, please point out where you think the arguments can be made stronger, and contact us if there are [...]

First published: October 31st, 2024
Source: https://www.lesswrong.com/posts/prm7jJMZzToZ4QxoK/the-compendium-a-full-argument-about-extinction-risk-from
Narrated by TYPE III AUDIO.
Oct 31, 2024 • 11min

“What TMS is like” by Sable

Discover the fascinating world of Transcranial Magnetic Stimulation (TMS) as a treatment for depression. One patient shares their firsthand experience, explaining the pre-treatment assessments and sensations felt during sessions. Learn about the gatekeeping around access to TMS and how it offers a non-invasive alternative to traditional antidepressants. The discussion highlights the challenges and successes of the treatment, emphasizing its rapid effectiveness and the commitment required for recovery. It's a captivating journey into mental health innovation.
Oct 28, 2024 • 29min

“The hostile telepaths problem” by Valentine

In this discussion, guest Valentine, an author known for exploring cognitive strategies at LessWrong, dives into the complexities of self-deception in social interactions. They address the 'hostile telepaths problem,' revealing how fear of others reading our thoughts can complicate communication. Valentine's unique insights include practical strategies like occlumency and the potential for embracing self-deception as a necessary tool. The conversation wraps up with thoughts on releasing the need for self-deception altogether.
Oct 27, 2024 • 11min

“A bird’s eye view of ARC’s research” by Jacob_Hilton

In this discussion, Jacob Hilton, author and researcher at ARC, delves into the intricate world of AI intent alignment research. He paints a cohesive picture of how various pieces of ARC's research interconnect within a unified vision. Hilton emphasizes significant challenges and innovative methodologies, shedding light on theoretical frameworks that guide their efforts. He also highlights future research directions, making a compelling case for the relevance of ARC's work in the evolving landscape of AI alignment.
Oct 25, 2024 • 3min

“A Rocket–Interpretability Analogy” by plex

The discussion explores the surprising link between the space race and AI alignment research. It examines how motivations differ across fields, revealing the influence of commercial interests on AI safety. The hosts ponder the impact of working on lofty scientific endeavors versus more sinister applications. There’s a deep dive into the idea of interpretability in AI, emphasizing its role in enhancing understanding and efficiency in neural networks. Tune in for a thought-provoking take on how these domains might share common challenges.
Oct 24, 2024 • 32min

“I got dysentery so you don’t have to” by eukaryote

Diving into a human challenge trial, the host shares their personal experience with shigellosis, a modern disease spread through poor hygiene. They discuss the innovative approach of bacteriophage therapy as a promising solution to antibiotic resistance. Anecdotes from participants highlight the unique mix of anxiety and humor in clinical trials. The podcast also sheds light on how Shigella disrupts bodily functions, while offering insight into the vital importance of recovery and hydration in the face of dysentery.
Oct 23, 2024 • 9min

“Overcoming Bias Anthology” by Arjun Panickssery

Arjun Panickssery, the author behind the "Overcoming Bias Anthology," explores how biases shape our decision-making. He explains the distinction between near and far thinking, and the curiosity surrounding our future ambitions. The conversation dives into the implications of artificial intelligence, examining both its potential and its existential risks. Panickssery also examines the roles cognitive biases play in society, and the tension between idealism and our concrete behaviors. His insights challenge listeners to reconsider their perspectives on reality and decision-making.
Oct 22, 2024 • 12min

“Arithmetic is an underrated world-modeling technology” by dynomight

Explore how arithmetic transcends mere calculations to become a powerful world-modeling technology. Discover its applications in scientific domains, like nutrition research involving chimpanzees. Understand the significance of unit consistency in calculations. Dive into the fascinating challenges of estimating costs and sizes for massive steel blocks, using imaginative comparisons to iconic structures. This discussion unveils the hidden potential of arithmetic in grasping complex concepts.
Oct 15, 2024 • 25min

“My theory of change for working in AI healthtech” by Andrew_Critch

In this discussion, Andrew Critch, an AI alignment expert working in healthtech, shares his insights on the urgent need to address the risks of AI, particularly the impending arrival of AGI. He highlights concerns about industrial dehumanization and how it could threaten humanity. Critch advocates for developing human-centric industries, especially in healthcare, as a way to foster human welfare amidst rapid AI advancement. He emphasizes the importance of moral commitment in the sector to navigate the challenges posed by AI.
Oct 15, 2024 • 18min

“Why I’m not a Bayesian” by Richard_Ngo

Richard Ngo, author and philosopher, dives into his critiques of Bayesianism as a method of reasoning. He explains the core principles of Bayesianism, highlighting its focus on degrees of belief, and presents philosophical objections, such as the need for fuzzy truth values. Ngo emphasizes the importance of model-based reasoning and discusses the limitations of Bayesian methods in complex scientific modeling. He draws on insights from Karl Popper to explore how models can differ in structural accuracy and practical usefulness.
