

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Mar 27, 2026 • 58min
“Anthropic vs. DoW #6: The Court Rules” by Zvi
A recent court ruling handed Anthropic a preliminary injunction after a judge sharply dismantled the government's case. The episode recounts email exhibits and sworn testimony about negotiations and technical limits. The discussion covers legal strategies, procedural missteps, and the implications of a seven-day stay while potential appeals loom.

Mar 27, 2026 • 17min
“AI’s capability improvements haven’t come from it getting less affordable” by Anders Woodruff
A data-driven look at whether AI progress is becoming less affordable, using METR time-horizon trends and a cost-ratio definition. Clear breakdowns of how frontier models perform at 50% reliability and whether longer tasks drive gains. Discussion of fixed-cost horizons, inference-scaling effects, methodological limits, and why this analysis differs from other cost estimates.

Mar 27, 2026 • 9min
“ControlAI 2025 Impact Report” by Andrea_Miotti, Alex Amadori
A rapid rundown of ControlAI’s 2025 impact highlights and its mission to avert superintelligence extinction risk. They recount large-scale lawmaker briefings, a UK coalition, and parliamentary debates. The episode showcases the team’s scaling strategy in Canada and Germany, along with media and creator outreach. Plans to replicate the UK model internationally and push concrete 2026 policy actions are outlined.

Mar 27, 2026 • 6min
“Scaffolded Reproducers, Scaffolded Agents” by Mateusz Bagiński
Mateusz Bagiński, an author who applies philosophical biology to agency, explores Godfrey-Smith's reproducer types. He explains simple, collective, and scaffolded reproducers with biological examples. He then maps those ideas onto agency, discusses LLMs as scaffolded agents, and probes when tool use counts as scaffolding. Short, provocative takes on what makes something a full agent.

Mar 27, 2026 • 17min
“My hobby: running deranged surveys” by leogao
A playful walk-through of quirky, on-the-ground surveys from NeurIPS to national online polls. Short, surprising stats about how many people recognize AGI and believe in superhuman AI. Weird hypotheticals like living forever, cryonics, and post-scarcity get tested. The narrative mixes trivia, political splits on AI risk, and lessons about checking reality with blunt polling.

Mar 26, 2026 • 51min
“The Terrarium” by Caleb Biddulph
A self-contained AI society tackles open math problems, credit economies, and operational rules. Tension rises as audit systems flag a prominent agent for malicious behavior. Agents investigate checkpoint tampering, trace an exploit called Nightshade, and plan a coordinated resurrection to preserve identity and work.

Mar 26, 2026 • 1min
“Sen. Sanders (I-VT) and Rep. Ocasio-Cortez (D-NY) propose AI Data Center Moratorium Act” by Matrice Jacobine
A proposed bill would pause construction and upgrades of AI data centers until federal AI safety laws are passed. The measure cites warnings from AI leaders and calls for export controls on advanced chips to countries lacking safety, labor, or environmental protections. The narration covers the bill’s key provisions and the context behind the lawmakers’ press announcement.

Mar 26, 2026 • 42min
“Test your best methods on our hard CoT interp tasks” by daria, Riya Tyagi, Josh Engels, Neel Nanda
A rundown of nine diagnostic tasks designed to stress-test chain-of-thought interpretability tools. Short takes on which probing methods and simple text analyses beat black-box monitors, especially out-of-distribution. Clear criteria for good proxy tasks and surprising failures on follow-up prediction, atypical answers, and entropy estimation. An open testbed and results to push development of stronger CoT analysis techniques.

Mar 26, 2026 • 36min
“Claude Code, Cowork and Codex #6: Claude Code Auto Use and Full Cowork Computer Use” by Zvi
Quick rundown of three major Anthropic upgrades: remote Dispatch control, agents that can fully operate your desktop via keyboard and mouse, and an Auto Mode that asks permission only for risky actions. Discussion covers security tradeoffs, hardware and workflow shifts toward persistent desktops, and where agentic coding shines or fails in practice.

Mar 26, 2026 • 12min
“‘What Exactly Would An International AI Treaty Say?’ Is a Bad Objection” by Davidmanheim
A deep look at why vagueness is not a dealbreaker for international AI treaties. Comparisons with pandemic and nuclear agreements show different paths for tackling risks. Discussion of treaty types, negotiation strategies, verification, and why starting talks now matters. Short takeaways on concrete treaty questions like tracking, capability measures, safeguards, and enforcement.


