Beyond The Prompt - How to use AI in your company

Nobody Is Getting New Manager Training for Their AI Team - with Dan Klein, UC Berkeley

Apr 15, 2026
Dan Klein, a UC Berkeley professor and CTO of Scaled Cognition, studies NLP and building more reliable AI. He discusses why fluent AI can sound right yet be wrong, maps the jagged frontier of AI strengths and limits, explains hallucination as an inherent trade-off, and argues that working with AI requires new editing and verification skills.
ADVICE

Avoid Noisy Model-of-Model Chains

  • Chaining noisy models to check each other creates cascades of errors, high latency, and high token costs.
  • Klein recommends building models with determinism and control surfaces instead of model-of-model checking.
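The cascade effect above can be made concrete with a toy probability sketch (my illustration, not from the episode): if every stage of a checker chain must be right, and each stage is assumed to be independently correct with some fixed accuracy, overall reliability decays exponentially with chain depth.

```python
def chain_reliability(accuracy: float, depth: int) -> float:
    """Probability an entire chain is correct when every stage must be right,
    assuming each stage succeeds independently with the same accuracy."""
    return accuracy ** depth

if __name__ == "__main__":
    # A 90%-accurate model looks fine alone, but a five-model chain
    # of such checkers is right only ~59% of the time.
    for depth in (1, 2, 3, 5):
        print(f"depth={depth}: reliability={chain_reliability(0.9, depth):.3f}")
```

The independence assumption is generous to the chain; correlated errors (models trained on similar data) make the real picture worse, which is one way to read Klein's preference for determinism and control surfaces over model-of-model checking.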
INSIGHT

Hallucination Is A Feature For Creative Tasks

  • Generative models shine when creativity and novel outputs are the goal; hallucination can be the product.
  • Klein contrasts image generation and story ideas where novelty (not exact factual recall) is the desired outcome.
INSIGHT

Scaling Hit Data Walls So New Ideas Matter

  • Model progress came from compressing decades of web text into scalable representations, but data walls mean scale alone no longer delivers unlimited returns.
  • Klein notes that scaling has hit diminishing returns and that new ideas are needed beyond training on more web data.
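Diminishing returns from scale are often described with power-law scaling curves. The sketch below is a hedged toy (the constants are made up, not Klein's or any published fit): each tenfold increase in training data buys a smaller absolute loss reduction, and the curve flattens toward an irreducible floor.

```python
def loss(data_tokens: float, a: float = 10.0, alpha: float = 0.095,
         irreducible: float = 1.7) -> float:
    """Toy power-law loss curve: loss falls as data grows, toward a floor.
    All constants are illustrative assumptions, not fitted values."""
    return a * data_tokens ** (-alpha) + irreducible

if __name__ == "__main__":
    prev = None
    for exp in range(9, 13):                # 1e9 .. 1e12 training tokens
        d = 10.0 ** exp
        l = loss(d)
        note = "" if prev is None else f"  (gain from 10x more data: {prev - l:.3f})"
        print(f"D=1e{exp}: loss={l:.3f}{note}")
        prev = l
```

The shrinking per-decade gains are the "data wall" intuition in miniature: past some point, more web text alone stops moving the needle much.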