Shameless Guesses, Not Hallucinations
Apr 17, 2026
A critique of the word 'hallucinations' for AI errors and why it misleads: a comparison between how humans guess on tests and how models produce false statements, how next-token prediction over massive training data can reinforce rare fabrications, and a discussion of post-training alignment, deception-like patterns in models, and why models 'shamelessly guess' rather than tell deliberate lies.
AI Snips
Schoolday Guessing Habits
- Scott Alexander recalls guessing on multiple-choice tests, sometimes filling in bubble C on the strength of an urban legend.
- He contrasts this with never guessing on short-answer questions asking for names (like inventors), noting that a hypothetical 'John Smith' guess could net about one extra point overall; the sketch below works through that arithmetic.
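To make the expected-value point concrete, here is a minimal sketch in Python; the question count, guess-success probability, and point value are hypothetical assumptions, not figures from the episode:

```python
# Hypothetical numbers only: the point is that a wild short-answer guess
# has a small positive expected payoff, yet people still refuse to make it.
questions = 100      # assumed: short-answer name questions over a school year
p_correct = 0.01     # assumed: chance a blind "John Smith" guess is right
points_each = 1.0    # assumed: points awarded per correct answer

expected_gain = questions * p_correct * points_each
print(f"Expected extra points from always guessing: {expected_gain:.1f}")
# -> Expected extra points from always guessing: 1.0
```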
Embarrassing Fake Essay Example
- Scott Alexander narrates a fake essay claiming Thomas Edison invented the cotton gin as an extreme, memorable example of a hallucination.
- He explains the social cost: a tiny chance of a better grade isn't worth sounding like an idiot to your classmates.
AIs Are Trained To Shamelessly Guess
- AIs are fundamentally guessers: training optimizes next-token prediction, shaping random initial weights into patterns learned over trillions of tokens.
- Even after training, a model must still guess which specific token fits (e.g., the surname completing 'Mr. ___'), and occasionally a fabricated guess turns out correct, reinforcing the habit; the sketch below mimics this forced guessing.
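As a minimal sketch of that dynamic, the toy bigram model below stands in for a real transformer (the corpus and function name are invented for illustration). It is trained purely on next-token counts, so when asked to complete 'Mr.' it must emit some surname; there is no built-in 'I don't know' option:

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; in a real model this would be trillions of tokens.
corpus = (
    "Mr. Smith met Mr. Jones . Mr. Smith greeted Mr. Brown . "
    "Mr. Jones thanked Mr. Smith ."
).split()

# "Training": count which token follows each token -- the bigram analogue
# of optimizing next-token prediction over a corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next token from the learned distribution.

    There is no abstention option: the model always emits *some* token,
    which is the shameless-guess behavior described in the snip above.
    """
    followers = counts[prev]
    return random.choices(list(followers), weights=list(followers.values()))[0]

random.seed(0)
# The prompt forces a surname; the model guesses one in proportion to
# training frequency, whether or not any particular guess is grounded.
print([next_token("Mr.") for _ in range(5)])
```

The design point carries over to real models: nothing in the next-token objective rewards abstaining, so guessing is the default behavior rather than a malfunction.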
