
Gary Marcus on the Massive Problems Facing AI & LLM Scaling | The Real Eisman Playbook, Episode 42
Jan 19, 2026 — Gary Marcus, a cognitive scientist and AI researcher, dives into the challenges facing AI today, particularly large language models (LLMs). He critiques their limitations, emphasizing diminishing returns in performance. Marcus discusses AI hallucinations, in which models confidently generate false information, and highlights the risks involved. He advocates for integrating symbolic components into AI systems to improve reliability and calls for a shift toward more diverse foundational research in the AI community.
AI Snips
Novelty And Cutoff Dates Break LLMs
- LLMs have a cutoff date in training data and struggle with novelty or new events.
- Band-aids like web search are poorly integrated and don't reliably close real-time accuracy gaps.
Tesla Summon Hits A Jet
- Marcus recounts a Tesla 'Summon' incident in which a car struck a multimillion-dollar jet at an airshow.
- The car lacked a world model for unusual obstacles and failed to recognize the jet as a novel object to avoid.
Community Turns Skeptical On Scaling
- The AI research community is shifting away from unquestioning LLM optimism toward skepticism.
- Surveys and prominent researchers now doubt scaling alone will produce general intelligence.