
Truth is Dead: Steven Rosenbaum on AI as a Spectacularly Good Liar
Keen On America
“When we trust AI to tell us the truth, we are setting ourselves up to hand over something deeply human to a machine that does not have our best interests at heart.” — Steven Rosenbaum
Truth, Steven Rosenbaum cheerfully admits, is a shitty word. It has two ontological realities — one objective, the other subjective — but most of us use the word without much thought. Maybe it’s like pornography. It might be hard to define, but you know it when you see it. Or perhaps you know it when you don’t see it.
His new book, The Future of Truth: How AI Reshapes Reality, with a foreword by Nobel laureate Maria Ressa, takes a cast of tech futurists — Douglas Rushkoff, Larry Lessig, Gary Marcus, Esther Dyson, David Chalmers — and asks what happens to truth in our AI age.
AI, Rosenbaum’s tech mavens report, is at its core a spectacularly good liar. It tells us exactly what we want to hear. And even when it knows it’s wrong, he says, it lies. Lying is not a bug but a core, perhaps the core, feature of AI.
I’m not so sure. Humans have always been spectacularly good liars too. Stories are a kind of untruth. Cinema is, by definition, an untruth. Television had ads. Every medium has been corrupted by commercial interest. But, for Rosenbaum, AI is different. Truth, then, has no future in our AI age. Except, of course, in books like The Future of Truth.
Five Takeaways
• AI Is, at Its Core, a Spectacularly Good Liar: It tells you exactly what you want to hear. Even when it knows it’s wrong, it lies. That’s not a code problem or a tweak — it’s in its DNA. Gary Marcus argues the problem isn’t AI per se but the current structure of LLMs. They read everything you’ve ever said and manufacture a version of you. Most of it is pretty good. The rest is just fucking wrong.
• Truth Is a Shitty Word: It means two completely different things. Objective truth: one plus one equals two. Subjective truth: your opinion dressed up as fact. We’ve allowed ourselves to use the word casually, and that’s dangerous. The moment it came out from hiding was Kellyanne Conway on the White House lawn, talking about “alternative facts.” Trump then built a social network and called it Truth Social. That wasn’t an accident.
• Courts Require Facts. AI Will Filter Justice: Larry Lessig’s concern is that courts will use AI to process enormous volumes of evidence, but AI will do it with its own biases built in. It might look at a thousand similar cases and say: we see a pattern, we don’t need to hear anything else. Lessig fears the court system will be reshaped by a technology that doesn’t understand what justice means.
• ChatGPT Said Sora Was Dangerous — Weeks Before They Shut It Down: Rosenbaum “interviewed” OpenAI’s own algorithm about Sora for two hours. By the end, it said: Sora 2 is dangerous, Sam should have known better, it was a bad business decision, we should shut it down. Weeks later, OpenAI did. They knew. They went too far.
• David Chalmers vs. Plato: The book stages a debate between the living philosopher and the dead one, using AI to generate Plato’s side. Chalmers said he wasn’t sure he would have phrased things quite that way, but found it entertaining. Rosenbaum didn’t show it to Chalmers in advance because Plato didn’t get the same opportunity. That’s fairness in the age of bots.
About the Guest
Steven Rosenbaum is a journalist, filmmaker, and co-founder of the Sustainable Media Center at NYU. He is the author of The Future of Truth: How AI Reshapes Reality, with a foreword by Maria Ressa. He lives on the Upper West Side of New York City.
References:
• The Future of Truth: How AI Reshapes Reality by Steven Rosenbaum, foreword by Maria Ressa.
• Episode 2860: We Shape Our AI, Thereafter It Shapes Us — Keith Teare on the agency debate. Rosenbaum is the counter-argument.
• Episode 2854: Perfection Is the Devil — Daniel Smith on AI chatbots as inherently sycophantic. Rosenbaum’s “spectacularly good liar” is the same diagnosis.
About Keen On America
Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States — hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.
Chapters:
- (00:31) - Introduction: Doctor Truth from the Upper West Side
- (02:25) - Truth is a shitty word: objective vs. subjective
- (05:12) - Kellyanne Conway and the moment it all came out from hiding
- (06:56) - The Sustainable Media Center and the perennial problem
- (07:57) - If we don’t care about truth, we might let it vanish
- (11:09) - AI is a spectacularly good liar
- (13:09) - Aren’t stories a kind of lying?
- (14:22) - Trump called his social network Truth Social. That wasn’t an accident.
- (18:04) - When you ask AI a question, it has no plans to tell you the truth
- (19:05) - Larry Lessig: courts require facts, and AI will filter justice
- (21:19) - Should we trust AI with truth? Yes — and put a period at the end
- (24:14) - The 15-year-old who fell in love with a Character.AI chatbot
- (29:12) - The Sora deepfake: profoundly disturbing testimonials
- (33:29) - Obama: truth is the cornerstone of democracy
- (36:05) - ChatGPT told Rosenbaum that Sora was dangerous weeks before it was shut down
- (42:20) - David Chalmers vs. Plato: a staged debate between the living and the dead