LessWrong (30+ Karma)

“Paper close reading: ‘Why Language Models Hallucinate’” by LawrenceC

Defining Hallucinations as Guessing Under Uncertainty

LawrenceC reads the abstract and frames hallucinations as plausible but incorrect guesses that models produce when they are uncertain.

