The Real Eisman Playbook

Daniel Guetta on the Guts of AI, Agentic AI & Why LLMs Hallucinate | The Real Eisman Playbook Ep 46

Feb 16, 2026
Daniel Guetta, a Columbia Business School professor who built data and analytics programs and worked at Amazon and Palantir, breaks down how large language models function. He explains why LLMs hallucinate and how embeddings and attention work. He explores agentic AI, practical business uses like travel booking and spreadsheet agents, plus where companies capture value today.
INSIGHT

Embeddings Encode Word Meaning

  • Embeddings map words and documents into high-dimensional numeric spaces so models can compute similarity and capture semantic relationships.
  • These emergent geometric relationships (e.g., king–queen parallels) arise from co-occurrence statistics across massive corpora.
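The co-occurrence-driven geometry described above can be sketched with hand-made toy vectors (real embeddings are learned, with hundreds or thousands of dimensions; the vectors and the dimension labels here are purely illustrative):

```python
import math

# Toy 3-d embeddings, dims roughly ~ [royalty, male, female].
# Hypothetical values for illustration only.
emb = {
    "king":  [1.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0],
    "man":   [0.0, 1.0, 0.0],
    "woman": [0.0, 0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity: how aligned two vectors are, ignoring length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# The classic analogy: king - man + woman lands near queen.
analogy = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max(emb, key=lambda word: cosine(analogy, emb[word]))
print(best)  # with these toy vectors, the nearest word is "queen"
```

In a trained model these parallel offsets emerge from the data rather than being designed in, which is what makes the king–queen result striking.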
INSIGHT

LLM Probabilities Differ From Human Reasoning

  • LLM output probabilities often diverge sharply from the uniform distribution humans expect, showing the models predict likely tokens rather than simulating true random draws.
  • Daniel Guetta stresses that, given their purely statistical nature, it's remarkable they produce correct answers as often as they do.
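The mismatch between next-token probabilities and a fair random draw can be illustrated with a softmax over hypothetical logits (the numbers below are made up; they mimic the well-known tendency of models to over-produce certain digits when asked for a "random" number):

```python
import math

# Hypothetical next-token logits a model might assign when asked to
# "pick a random number between 1 and 10" (illustrative values only).
logits = {"1": 0.2, "2": 0.5, "3": 1.1, "4": 0.9, "5": 1.0,
          "6": 0.8, "7": 3.0, "8": 1.2, "9": 0.6, "10": 0.3}

def softmax(scores):
    """Convert raw logits into a probability distribution."""
    m = max(scores.values())                      # subtract max for stability
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

probs = softmax(logits)
uniform = 1 / len(logits)   # 0.10 -- what a genuinely fair draw would give
print(probs["7"], uniform)  # "7" gets far more than its fair share
```

The model samples from whatever distribution its training statistics produced, not from the uniform one a human has in mind.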
ANECDOTE

LLMs Boost Classical ML For Moderation

  • Companies use LLMs to augment classical machine learning for tasks like content moderation by extracting meaning from text.
  • Guetta gives examples where embeddings or LLM scores feed into existing models to flag risky content and reduce human review workload.
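One way such a pipeline can be sketched: an embedding-derived risk score routes only ambiguous posts to human reviewers. Everything here (the centroid, the thresholds, the routing function) is a hypothetical illustration of the pattern, not a description of any specific system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical centroid of embeddings of previously flagged content.
risky_centroid = [0.8, 0.1, 0.6]

def route(post_embedding, auto_block=0.9, auto_allow=0.3):
    """Return 'block', 'allow', or 'human_review' based on similarity
    to known-risky content; only the ambiguous middle needs a person."""
    score = cosine(post_embedding, risky_centroid)
    if score >= auto_block:
        return "block"
    if score <= auto_allow:
        return "allow"
    return "human_review"

print(route([0.79, 0.12, 0.61]))  # near the risky centroid -> "block"
print(route([0.0, 1.0, 0.0]))     # far away -> "allow"
```

The point of the augmentation is the middle branch: automating the clear-cut cases at both ends is what shrinks the human review queue.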