
The LRB Podcast: Will the AI bubble burst?
Jan 7, 2026

John Lanchester, a contributing editor at the London Review of Books and an author known for his insights on literature and finance, dives into the complexities of the AI financial bubble. He explores historical parallels with past economic bubbles and critiques the accuracy of the term 'artificial intelligence.' Lanchester discusses how AI firms might monopolize markets as Amazon did, and he raises concerns about the ethical implications and real harms of large language models. The conversation paints a vivid picture of what the future might hold as the technology evolves.
Cooking Test Reveals AI Confident Errors
- John tested ChatGPT with reverse-searing timings and it gave wrong cooking times, illustrating confident but incorrect outputs.
- He uses such errors to show how these models produce confident, human-sounding answers without any real understanding.
LLMs As Advanced Cut-and-Paste
- Large language models produce plausible text by predicting next tokens, effectively performing an advanced form of cut-and-paste.
- This explains impressive feats (bar exams) and fabricated, authoritative-sounding errors (made-up papers).
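The "advanced cut-and-paste" point above can be made concrete with a toy sketch. This is not a real language model: it is a minimal bigram predictor (the corpus, function names, and greedy generation loop are all invented for illustration) that, like an LLM, simply emits whichever token most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this sketch.
corpus = "the bar exam is hard the bar is high the exam is over".split()

# Count how often each token follows each preceding token.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(token):
    """Return the token that most frequently followed `token` in the corpus."""
    return followers[token].most_common(1)[0][0]

# Greedily generate a short continuation from a seed token.
out = ["the"]
for _ in range(4):
    out.append(predict_next(out[-1]))
print(" ".join(out))
```

The output is fluent precisely because every step recombines fragments of the training text; nothing in the model knows whether the resulting sentence is true, which is how plausible-sounding fabrications arise.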
OpenAI Turned Safety Hype Into Investment Momentum
- OpenAI's narrative blended existential AI-safety rhetoric with aggressive fundraising and Microsoft partnership, turning research into a mainstream investment story.
- Personal splits (Elon Musk, Dario Amodei, Ilya Sutskever) showed the tensions between the stated mission and the need for capital.