
On Point with Meghna Chakrabarti: How to make AI work for us
Mar 30, 2026
Gary Marcus, a cognitive scientist and AI entrepreneur, warns about concentrated power, hallucinations, and overhyped claims. He discusses threats ranging from disinformation to bioweapons, how AI erodes critical thinking in education, and why independent regulation is needed. He also weighs realistic medical uses, the limits of current models, and practical steps citizens can take.
Episode notes
AI Harms Reach Beyond Job Loss
- Most people are affected by AI risks even if their jobs aren't directly threatened, through misinformation, deepfake harms, and educational decline.
- Marcus highlights non-consensual deepfake porn and "cognitive surrender," where students stop learning because they rely on chatbots.
The Henrietta Hallucination Story
- Gary Marcus recounts receiving AI-written biographies that falsely claimed he had a pet chicken named Henrietta.
- He uses the story to show how large language models cluster facts and overgeneralize, producing confident but false hallucinations.
Hallucinations Persist Despite Hype
- Hallucinations persist despite hype that models can self-improve or "think" like humans; architectural limits remain.
- Marcus cites a Stanford finding that models sometimes analyze images they were never given, evidence of fundamental flaws.
