Mixture of Experts

Why language models hallucinate, revisiting Amodei’s code prediction and AI in the job market

Sep 12, 2025
Ailey McConnan, a tech news writer at IBM Think, shares the week's AI headlines, while Chris Hay, a distinguished engineer, dives deep into the intricacies of language model hallucinations and their implications for reliability. Skyler Speakman, a senior research scientist, discusses the evolving role of AI in coding jobs and the significant impact on the job market. They also explore the fascinating potential of running language models on ultra-compact hardware, reshaping how we think about AI technology in our everyday lives.
AI Snips
ADVICE

Verify Facts With Tools, Not Internal Memory

  • Combine calibrated uncertainty estimates with evidence-checking tools (retrieval, symbolic methods, sanity checks) to detect hallucinations.
  • Prefer tool calls or retrieval-augmented generation (RAG) for recent or fact-based queries rather than relying solely on the model's internal knowledge.
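The routing advice above can be sketched in a few lines. This is a minimal illustration, not any panelist's implementation: the function names, the keyword heuristic, and the confidence threshold are all hypothetical stand-ins for a real calibrated-uncertainty signal and retrieval tool.

```python
# Sketch: prefer tools over model memory when confidence is low
# or the query looks recency/fact-sensitive. Hypothetical names throughout.

RECENCY_KEYWORDS = {"today", "latest", "current", "recent", "now"}

def looks_fact_sensitive(query: str) -> bool:
    """Crude heuristic: flag queries that mention recency."""
    return bool(set(query.lower().split()) & RECENCY_KEYWORDS)

def route(query: str, model_confidence: float, threshold: float = 0.8) -> str:
    """Decide whether to trust internal memory or call an evidence tool."""
    if model_confidence < threshold or looks_fact_sensitive(query):
        return "retrieval"  # ground the answer in retrieved evidence
    return "model"          # calibrated and not recency-sensitive

print(route("What is the latest GDP figure?", 0.95))  # retrieval
print(route("Explain binary search", 0.9))            # model
```

In practice the confidence signal would come from calibrated uncertainty estimates (e.g. logit-based or sampling-based), and the "retrieval" branch would invoke a search or RAG pipeline rather than return a label.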
INSIGHT

Hallucinations Fuel Creativity

  • Hallucinations can be a source of creativity when models mix concepts and propose novel ideas.
  • Removing all hallucination would eliminate playful and generative behaviors like persona-based outputs.
ANECDOTE

The 90% Code Prediction Revisited

  • Dario Amodei predicted AI would write 90% of code within months, prompting debate about automation versus augmentation.
  • Panelists note that far more AI-generated code now exists, but developer roles and orchestration remain crucial.