
The Andrew Klavan Show A Conversation on AI We Need To Have w/ Stephen Meyer & George D. Montañez
Feb 25, 2026 — Guests: George D. Montañez, a Harvey Mudd College computer science professor who explains how large language models work, and Stephen C. Meyer, a scholar of origins and the philosophy of mind. They probe AI versus human intelligence, the limits of LLMs, meaning versus prediction, risks from model collapse and misuse, and who decides alignment. Short, sharp conversations about technical limits and real-world dangers.
AI Augments Human Intelligence Not Replaces It
- AI functions mainly as augmentation rather than replacement of human intelligence.
- Stephen C. Meyer emphasizes their dependence on human input and oversight, noting that LLMs require continuous training and correction to remain useful.
LLM Performance Is Jagged And Unpredictable
- LLM performance is jagged: models can fail on simple tasks yet solve very hard ones unpredictably.
- George D. Montañez notes a model might flub multi-digit arithmetic but solve International Math Olympiad problems.
LLMs Predict Tokens Not Understand Meaning
- Large language models predict the next token using statistical patterns from vast digitized text corpora.
- George D. Montañez explains that embeddings map words to vectors so that similarity in meaning becomes geometric proximity — the model captures syntax and statistical structure without true semantics.
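
The embedding idea Montañez describes can be sketched in a few lines: each word gets a vector, and "similarity is proximity" means related words have a small angle between their vectors. The values below are toy numbers chosen for illustration, not embeddings from any real model (which typically have hundreds or thousands of dimensions).

```python
import math

# Toy 3-dimensional "embeddings" (illustrative values, not from a real model).
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words used in similar contexts end up close together, so their
# cosine similarity is high; unrelated words score lower.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # near 1.0
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

Note what the comparison does and does not show: the arithmetic captures statistical association between words, which is exactly the "syntax without true semantics" point — nothing in the vectors knows what a king or an apple *is*.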