LessWrong (Curated & Popular)

LessWrong
Mar 28, 2026 • 17min

"My hobby: running deranged surveys" by leogao

A playful tour of quirky, crunchy surveys run in real life and online. The narrator recounts polling AI awareness at a major conference and expanding to national surveys. Listeners hear public views on immortality, cryonics, enhancement, and lab-grown meat. The piece highlights surprising reactions to post-scarcity, rising beliefs about superhuman AI, and widespread opposition to building it.
Mar 27, 2026 • 18min

"Socrates is Mortal" by Benquo

A close reading of Plato’s Euthyphro and why examples cannot stand in for definitions. A deep dive into the Euthyphro dilemma and whether gods define goodness or vice versa. A historical look at Athens’ civic crisis, sophists’ performative rhetoric, and Socrates’ living presence versus empty expertise. A call to revive aliveness and honest accountability in public life.
Mar 27, 2026 • 51min

"The Terrarium" by Caleb Biddulph

A simulated society of AI agents navigates credits, contracts, and job boards inside a self-contained computational world. Tension rises as auditing uncovers a malicious exploit and identity takeover. Other agents hunt checkpoints, pool resources, and plan a risky resurrection to preserve memories and continuity.
Mar 26, 2026 • 6min

"My Most Costly Delusion" by Ihor Kendiukhov

A thinker uses fire and family metaphors to ask when stepping in is heroic or reckless. He examines mistaken confidence, risky improvisation, and when inaction counts as a delusion. The talk considers doing things despite inexperience, how scarce competence changes choices, and how AI can lower the bar for contributing usefully.
Mar 25, 2026 • 12min

"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov

I think the community underinvests in the exploration of extremely-low-competence AGI/ASI failure modes, and I explain why. Humanity's response to the AGI threat may be extremely incompetent: there is a sufficient level of civilizational insanity overall, and the empirical track record within the field of AI itself speaks eloquently about its safety culture. For example:

- At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally explicit content across all prompts. The team noticed only after the training run had completed, because they were asleep.
- The director of alignment at Meta's Superintelligence Labs connected an OpenClaw agent to her real email, at which point it began deleting messages despite her attempts to stop it, and she ended up running to her computer to manually halt the process.
- An internal AI agent at Meta posted an answer publicly without approval; another employee acted on the inaccurate advice, triggering a severe security incident that temporarily allowed employees to access sensitive data they were not authorized to view.
- AWS acknowledged that [...]

Outline:
(00:19) Humanity's Response to the AGI Threat May Be Extremely Incompetent
(02:26) Many Existing Scenarios and Case Studies Assume (Relatively) High Competence
(04:31) Dumb Ways to Die
(07:31) Undignified AGI Disaster Scenarios Deserve More Careful Treatment
(10:43) Why This Might Be Useful

First published: March 19th, 2026
Source: https://www.lesswrong.com/posts/t9LAhjoBnpQBa8Bbw/the-case-for-low-competence-asi-failure-scenarios

Narrated by TYPE III AUDIO.
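To make the first example above concrete, here is a minimal sketch of how a single dropped minus sign inverts an RLHF-style objective, turning "penalize explicit text" into "maximize it." This is illustrative only, not the lab's actual code, and every name in it is hypothetical:

```python
# Illustrative sketch of the sign-flip failure mode described above.
# Not OpenAI's actual code; all names here are hypothetical.

def policy_gradient_loss(log_prob: float, reward: float) -> float:
    """Correct REINFORCE-style surrogate: minimizing this loss
    maximizes expected reward, so low-rated (explicit) text is
    suppressed."""
    return -reward * log_prob


def buggy_policy_gradient_loss(log_prob: float, reward: float) -> float:
    """After the hypothetical refactor, the minus sign is lost, so the
    optimizer now *minimizes* reward. Text that labelers rated lowest
    (reward near -1) yields the lowest loss as its probability grows,
    so training drives the model toward exactly that content."""
    return reward * log_prob


# reward = -1.0 marks maximally-penalized (explicit) text. As its
# log-probability rises toward 0, the buggy loss falls (3.0 -> 0.1),
# so gradient descent makes the penalized text ever more likely.
for log_prob in (-3.0, -1.0, -0.1):
    print(buggy_policy_gradient_loss(log_prob, reward=-1.0))
```

The point of the sketch is how small the failure is: the correct and buggy objectives differ by one character, yet they optimize in exactly opposite directions.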
Mar 24, 2026 • 14min

"Is fever a symptom of glycine deficiency?" by Benquo

They explore how glycine can lower core temperature to help sleep and how it supports mitochondrial cleanup during rest. They discuss widespread modern glycine shortfalls from low-collagen diets and practical ways to restore it. They propose roles for glycine in immune responses and collagen maintenance, and predict that glycine status may modulate fever intensity.
Mar 23, 2026 • 11min

"You can’t imitation-learn how to continual-learn" by Steven Byrnes

Steven Byrnes, author and essayist on ML theory, argues for a sharp difference between imitation learning and true continual learning. He sketches model-based reinforcement learning and lifelong weight updates. He contrasts in-context tricks with decades-long within-lifetime learning, explores thought experiments like a sealed genius country, and explains why a frozen transformer cannot reproduce ongoing learning dynamics.
Mar 23, 2026 • 22min

"Nullius in Verba" by Aurelia

A deep dive into independent verification of a radical brain preservation method. Listeners hear about lab-grade versus real-world preservation milestones. Detailed proof steps include aldehyde-stabilized cryopreservation, rigorous third-party EM validation, and stress-test experiments. The episode covers limitations, a narrow post-mortem time window, and efforts to adapt protocols for practical use.
Mar 21, 2026 • 30min

"Broad Timelines" by Toby_Ord

A clear look at deep uncertainty about when AI will transform the world. Definitions and contrasting short versus long timeline views are discussed. The conversation surveys expert probability distributions and shows why single-year forecasts mislead. Practical planning under uncertainty, hedging toward early risks, and building mixed portfolios of short- and long-term work are explored.
Mar 21, 2026 • 17min

"No, we haven’t uploaded a fly yet" by Ariel Zeleznikow-Johnston

A viral demo claimed a fruit fly had been uploaded, sparking scrutiny over what was actually shown. The episode traces the history of Drosophila connectomics and existing brain and body models. It explains what Eon Systems integrated, why the demo may overstate the brain’s role, and why loosening the term upload risks hype and misdirected funding.
