

AI Article Readings
Readings of great articles in AI voices
askwhocastsai.substack.com
Episodes

Mar 29, 2026 • 18min
dark ilan - By Ozy Brennan
https://open.substack.com/pub/ozybrennan/p/dark-ilan?utm_campaign=post-expanded-share&utm_medium=web
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit askwhocastsai.substack.com/subscribe

Mar 26, 2026 • 20min
Every ACX House Party - By Corvin
This post by Corvin is a pastiche of the ACX Bay Area House Party Series.
https://open.substack.com/pub/ravenstales/p/every-acx-house-party?utm_campaign=post-expanded-share&utm_medium=web

Mar 26, 2026 • 7min
Every Debate On Pausing AI - By Scott Alexander
https://open.substack.com/pub/astralcodexten/p/every-debate-on-pausing-ai?utm_campaign=post-expanded-share&utm_medium=web

Mar 23, 2026 • 33min
Being John Rawls - By Scott Alexander
“Full Cast” AI reading of Being John Rawls - By Scott Alexander.
* 00:00 - Introduction
* 03:23 - II
* 11:27 - III
* 20:21 - IV
* 25:00 - V
* 31:17 - VI
https://open.substack.com/pub/astralcodexten/p/being-john-rawls?utm_campaign=post-expanded-share&utm_medium=web

Mar 16, 2026 • 21min
Polly Wants a Better Argument, By SE Gyges
In this article, SE Gyges argues that the widely cited “stochastic parrots” critique of large language models is not only outdated but actively harmful to serious discussion of AI. The piece examines how the argument misunderstands modern AI systems, ignores advances like multimodal training and reinforcement learning, and rests on a narrow definition of “meaning.” By walking through both empirical evidence and conceptual flaws in the original claim, Gyges contends that dismissing LLMs as mere parrots prevents society from grappling with the real ethical and political challenges posed by systems that demonstrably do work.
* 00:00 - Introduction
* 02:31 - Even If True, The Argument Is Irrelevant
* 03:32 - The Argument Doesn’t Apply to Any Major Model Since 2023
* 06:45 - The Argument Was Already Obsolete When Published
* 08:05 - The Argument Is Empirically False
* 08:19 - The Octopus Test
* 12:05 - The Platonic Representation Hypothesis
* 13:24 - Form Carries Meaning
* 15:48 - The Argument Is Badly Constructed
* 16:07 - Parrots Are Amazing, Actually
* 16:56 - The Definition of Meaning Is Circular
* 19:28 - Conclusion
https://open.substack.com/pub/verysane/p/polly-wants-a-better-argument?utm_campaign=post-expanded-share&utm_medium=web

Mar 13, 2026 • 32min
Why ATMs didn’t kill bank teller jobs, but the iPhone did - By David Oks
In this article, David Oks takes a familiar story about technology and jobs, the idea that ATMs automated banking without destroying teller work, and turns it on its head, arguing that the real disruption came later, in the smartphone era. Using the history of bank branches, bank tellers, and mobile banking, he explores a broader point about technological change: the biggest effects often come not when a new tool replaces part of a job, but when it creates an entirely new way of doing things that makes the old role far less necessary.
* 00:00 - Introduction
* 07:17 - ATMs didn’t kill bank teller jobs
* 20:32 - But iPhones actually did
* 26:25 - Automating a job is much harder than making it irrelevant
https://open.substack.com/pub/davidoks/p/why-the-atm-didnt-kill-bank-teller?utm_campaign=post-expanded-share&utm_medium=web

Mar 6, 2026 • 32min
The Elect - By Tomás Bjartur
A short story by Tomás Bjartur.
https://open.substack.com/pub/tomasbjartur/p/the-elect?utm_campaign=post-expanded-share&utm_medium=web

Mar 2, 2026 • 22min
Clawed - By Dean W. Ball
In this post, Dean W. Ball explores the gradual nature of life and death, drawing a poignant parallel between the passing of his father and the ongoing decline of the American republic. Using a recent policy skirmish between the AI firm Anthropic and the U.S. Department of War over the military deployment of the Claude AI system as a focal point, he examines the shifting dynamics of government power and private enterprise. Ultimately, he invites readers to look beyond traditional partisan divides and carefully consider how the control of frontier AI will shape the future of human liberty.
* 00:00 - Introduction
* 00:05 - One
* 02:22 - Two
* 04:52 - Three
* 06:52 - Four
* 18:19 - Five
https://open.substack.com/pub/hyperdimensional/p/clawed?utm_campaign=post-expanded-share&utm_medium=web

Mar 1, 2026 • 21min
"All Lawful Use": Much More Than You Wanted To Know - By Scott Alexander
In this post, Scott Alexander examines the legal and contractual implications of the Department of War's "all lawful use" demand for AI systems, breaking down what US law actually permits regarding mass domestic surveillance and autonomous weapons, and why the phrase "lawful use" provides far less protection than most people assume.
* 00:00 - Introduction
* 02:42 - Mass domestic surveillance: more than you wanted to know
* 08:59 - Autonomous weapons: more than you wanted to know
* 13:21 - Comments on OpenAI’s FAQ
* 17:51 - Questions that you should be asking
https://open.substack.com/pub/astralcodexten/p/all-lawful-use-much-more-than-you?utm_campaign=post-expanded-share&utm_medium=web

Feb 24, 2026 • 57min
THE 2028 GLOBAL INTELLIGENCE CRISIS
In this essay, Citrini and Alap Shah construct a fictional macro memo written from the perspective of June 2028, using the format of financial retrospective analysis to explore a single underexamined scenario: what happens when AI adoption succeeds beyond all expectations, and that success becomes the source of catastrophic economic disruption. The piece traces how accelerating AI capability interacts with the structures of the white-collar labour market, corporate spending, consumer demand, credit markets, and government fiscal policy, identifying the feedback loops that connect each layer into a single, self-reinforcing system. The authors are explicit that this is a thought exercise rather than a forecast, and the essay closes by returning the reader to February 2026, framing the scenario as a risk to model and prepare for rather than a fate already in motion.
* 00:00 - Introduction
* 00:56 - Macro Memo
* 00:57 - The Consequences of Abundant Intelligence
* 05:33 - How It Started
* 10:18 - When Friction Went to Zero
* 19:17 - From Sector Risk to Systemic Risk
* 27:47 - The Intelligence Displacement Spiral
* 32:45 - The Daisy Chain of Correlated Bets
* 47:34 - The Battle Against Time
* 54:12 - The Intelligence Premium Unwind
* 56:43 - Acknowledgements
https://open.substack.com/pub/citrini/p/2028gic?utm_campaign=post-expanded-share&utm_medium=web


