LessWrong (30+ Karma)

Mar 28, 2026 • 6min

“Nick Bostrom: How big is the cosmic endowment?” by Zach Stein-Perlman

Superintelligence, pp. 122–3. 2014. Consider a technologically mature civilization capable of building sophisticated von Neumann probes of the kind discussed in the text. If these can travel at 50% of the speed of light, they can reach some [...] stars before the cosmic expansion puts further acquisitions forever out of reach. At 99% of c, they could reach some [...] stars. These travel speeds are energetically attainable using a small fraction of the resources available in the solar system. The impossibility of faster-than-light travel, combined with the positive cosmological constant (which causes the rate of cosmic expansion to accelerate), implies that these are close to upper bounds on how much stuff our descendants can acquire. If we assume that 10% of stars have a planet that is—or could by means of terraforming be rendered—suitable for habitation by human-like creatures, and that it could then be home to a population of a billion individuals for a billion years (with a human life lasting a century), this suggests that around [...] human lives could be created in the future by an Earth-originating intelligent civilization. There are, however, reasons to think this greatly underestimates the true number. By disassembling non-habitable planets and collecting matter from the [...]

---

First published: March 28th, 2026
Source: https://www.lesswrong.com/posts/GLD5AiiQJqFbKX9vo/nick-bostrom-how-big-is-the-cosmic-endowment

---

Narrated by TYPE III AUDIO.
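The per-planet arithmetic in the excerpt can be checked directly. The reachable star counts are elided in the excerpt, so a minimal sketch leaves the total in terms of a hypothetical `n_stars` parameter rather than guessing the missing figures:

```python
# Bostrom's stated assumptions, from the excerpt: a habitable planet
# hosts a billion individuals for a billion years, with each human
# life lasting a century.
population = 1e9        # individuals alive at any one time
duration_years = 1e9    # years the planet remains habitable
lifespan_years = 100    # years per human life

# Sequential generations of `population` people over `duration_years`:
lives_per_planet = population * duration_years / lifespan_years
print(f"{lives_per_planet:.0e} lives per habitable planet")

# The excerpt assumes 10% of reachable stars have (or can be
# terraformed to have) such a planet; the star count itself is
# elided in the excerpt, so it stays a free parameter here.
def total_lives(n_stars, habitable_fraction=0.1):
    return n_stars * habitable_fraction * lives_per_planet
```

Under these assumptions each habitable planet yields 10^16 lives, and the final estimate scales linearly with whatever reachable-star count is plugged in.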
Mar 28, 2026 • 7min

“Don’t Overdose Locally Beneficial Changes” by Mateusz Bagiński

A cautionary take on pushing locally beneficial changes to extremes. Uses a calorie-intake analogy to show that optimal amounts exist. Explores how context shifts change marginal utility. Highlights cases where helpful practices become harmful when taken too far. Surveys examples across meditation, polarization, AI thinking, and alarmism.
Mar 28, 2026 • 6min

“Stanley Milgram wasn’t pessimistic enough about human nature?” by David Gross

A reexamination of the Milgram experiment questions common readings of obedience and responsibility. The discussion covers agentic state theory, alternative motives like sadism, and Arendt’s critique of obedience as explanation. New reviews of audio tapes and procedural details suggest participants often broke rules in ways that affected outcomes.
Mar 28, 2026 • 5min

[Linkpost] “What if superintelligence is just weak?” by Simon Lermen

A critique of the idea that advanced AI must be omnipotent to pose risk. A tiger-cub metaphor shows how modest systems can scale into danger. Discussion of how automation and access, not dramatic breakthroughs, could create critical risks. Challenges the notion that distributing capabilities or monitoring multiple systems prevents catastrophe.
Mar 28, 2026 • 10min

“Pray for Casanova” by Tomás B.

A meditation on what happens when beauty is lost and how people cope, grow bitter, or become marked by revulsion. Historical portraits of Mary Wortley Montagu, John Wilmot, and Casanova explore decline, nostalgia, and social obsolescence. The piece questions whether reliving past pleasures is a kind of earned wireheading and probes plastic surgery, future restorative tech, and moral prayers for the marred.
Mar 28, 2026 • 56min

“AI #161 Part 1: 80,000 Interviews” by Zvi

A rapid tour of agentic coding breakthroughs, product updates, and debates over whether AI will replace entry-level white-collar work. Coverage of Anthropic’s 80,000 interviews about public attitudes toward AI and implications for productivity and risk. Discussion of deepfakes, phone-calling agents, OpenAI financing moves, and Elon’s chip plans. Light cultural jokes and audio highlights round it out.
Mar 27, 2026 • 58min

“Anthropic vs. DoW #6: The Court Rules” by Zvi

A recent court ruling handed Anthropic a preliminary injunction after a judge critically dismantled the government's case. Email exhibits and sworn testimony about negotiations and technical limits are recounted. The discussion covers legal strategies, procedural missteps, and the implications of a seven-day stay while potential appeals loom.
Mar 27, 2026 • 17min

“AI’s capability improvements haven’t come from it getting less affordable” by Anders Woodruff

A data-driven look at whether AI progress is becoming less affordable, using METR time-horizon trends and a cost-ratio definition. Clear breakdowns of how frontier models perform at 50% reliability and whether longer tasks drive gains. Discussion of fixed-cost horizons, inference-scaling effects, methodological limits, and why this analysis differs from other cost estimates.
Mar 27, 2026 • 9min

“ControlAI 2025 Impact Report” by Andrea_Miotti, Alex Amadori

A rapid rundown of ControlAI’s 2025 impact highlights and its mission to avert extinction risk from superintelligence. They recount large-scale lawmaker briefings, a UK coalition, and parliamentary debates. The team’s scaling strategy in Canada and Germany is showcased, along with media and creator outreach. Plans to replicate the UK model internationally and push concrete 2026 policy actions are outlined.
Mar 27, 2026 • 6min

“Scaffolded Reproducers, Scaffolded Agents” by Mateusz Bagiński

Mateusz Bagiński, an author who applies philosophical biology to questions of agency, explores Godfrey-Smith's reproducer types. He explains simple, collective, and scaffolded reproducers with biological examples. He then maps those ideas onto agency, discusses LLMs as scaffolded agents, and probes when tool use counts as scaffolding. Short, provocative takes on what makes something a full agent.
