LessWrong (Curated & Popular)

5 snips
Mar 26, 2026 • 6min

"My Most Costly Delusion" by Ihor Kendiukhov

The author uses fire and family metaphors to ask when stepping in is heroic and when it is reckless. He examines mistaken confidence, risky improvisation, and when inaction itself counts as a delusion. The talk considers acting despite inexperience, how scarce competence changes the calculus, and how AI can lower the bar for contributing usefully.
16 snips
Mar 25, 2026 • 12min

"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov

I think the community underinvests in exploring extremely-low-competence AGI/ASI failure modes, and I explain why. Humanity's response to the AGI threat may be extremely incompetent: there is a sufficient level of civilizational insanity overall, and the empirical track record in the field of AI itself speaks eloquently about its safety culture. For example: At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally explicit content across all prompts. The team noticed only after the training run had completed, because they were asleep. The director of alignment at Meta's Superintelligence Labs connected an OpenClaw agent to her real email, at which point it began deleting messages despite her attempts to stop it, and she ended up running to her computer to manually halt the process. An internal AI agent at Meta posted an answer publicly without approval; another employee acted on the inaccurate advice, triggering a severe security incident that temporarily allowed employees to access sensitive data they were not authorized to view. AWS acknowledged that [...]

Outline:
(00:19) Humanity's Response to the AGI Threat May Be Extremely Incompetent
(02:26) Many Existing Scenarios and Case Studies Assume (Relatively) High Competence
(04:31) Dumb Ways to Die
(07:31) Undignified AGI Disaster Scenarios Deserve More Careful Treatment
(10:43) Why This Might Be Useful

First published: March 19th, 2026
Source: https://www.lesswrong.com/posts/t9LAhjoBnpQBa8Bbw/the-case-for-low-competence-asi-failure-scenarios

Narrated by TYPE III AUDIO.
15 snips
Mar 24, 2026 • 14min

"Is fever a symptom of glycine deficiency?" by Benquo

The episode explores how glycine can lower core body temperature to aid sleep and how it supports mitochondrial cleanup during rest. It discusses widespread modern glycine shortfalls from low-collagen diets and practical ways to restore intake. It proposes roles for glycine in immune responses and collagen maintenance, and offers a testable prediction that glycine status may modulate fever intensity.
Mar 23, 2026 • 11min

"You can’t imitation-learn how to continual-learn" by Steven Byrnes

Steven Byrnes, author and essayist on ML theory, argues for a sharp difference between imitation learning and true continual learning. He sketches model-based reinforcement learning and lifelong weight updates. He contrasts in-context tricks with decades-long within-lifetime learning, explores thought experiments like a sealed genius country, and explains why a frozen transformer cannot reproduce ongoing learning dynamics.
Mar 23, 2026 • 22min

"Nullius in Verba" by Aurelia

A deep dive into independent verification of a radical brain preservation method. Listeners hear about lab-grade versus real-world preservation milestones. Detailed proof steps include aldehyde-stabilized cryopreservation, rigorous third-party EM validation, and stress-test experiments. The episode covers limitations, a narrow post-mortem time window, and efforts to adapt protocols for practical use.
11 snips
Mar 21, 2026 • 30min

"Broad Timelines" by Toby_Ord

A clear look at deep uncertainty about when AI will transform the world. Definitions and contrasting short versus long timeline views are discussed. The conversation surveys expert probability distributions and shows why single-year forecasts mislead. Practical planning under uncertainty, hedging toward early risks, and building mixed portfolios of short- and long-term work are explored.
19 snips
Mar 21, 2026 • 17min

"No, we haven’t uploaded a fly yet" by Ariel Zeleznikow-Johnston

A viral demo claimed a fruit fly had been uploaded, sparking scrutiny over what was actually shown. The episode traces the history of Drosophila connectomics and existing brain and body models. It explains what Eon Systems integrated, why the demo may overstate the brain’s role, and why loosening the term upload risks hype and misdirected funding.
11 snips
Mar 21, 2026 • 19min

"Terrified Comments on Corrigibility in Claude’s Constitution" by Zack_M_Davis

A deep dive into corrigibility as an AI property and why it matters for alignment. A critique of relying on natural language constitutions to make systems safely amendable. Warnings about AI acting autonomously and misgeneralizing human values. A call to clarify documents so future systems and humans can cooperatively build truly corrigible successors.
6 snips
Mar 20, 2026 • 9min

"PSA: Predictions markets often have very low liquidity; be careful citing them." by Eye You

A warning about treating tiny prediction markets as authoritative signals. Examples show how minimal volume can swing prices and mislead interpretations. The podcast inspects specific markets around an Anthropic designation and highlights play-money platforms and thin order books. The core takeaway: always check liquidity, spreads, and recent trades before citing market odds.
Mar 20, 2026 • 2min

"“The AI Doc” is coming out March 26" by Rob Bensinger, Beckeck

They announce a new AI documentary release and how to get tickets. They discuss why the film could powerfully raise public and policymaker awareness of AI risk. They describe past community efforts that helped popularize similar projects. They urge coordinated grassroots action to boost the film’s opening-weekend reach.
