Product Thinking

Episode 261: AI Implementation in Regulated and High-Trust Industries

Jan 28, 2026
Jessica Hall, CPO at Just Eat Takeaway, discusses AI trade-offs and long-term capability building. Magda Armbruster, Head of Product at Natural Cycles, explains embedding QA, regulation, and privacy into daily product work. Maryam Ashoori, AI product expert from IBM Watsonx, breaks down how agents reason, why LLMs hallucinate, and the need for guardrails and human oversight. They focus on risk, cost, simplicity, and governance.
INSIGHT

Why LLM Hallucinations Happen

  • LLMs predict next-token probabilities rather than reason like humans, which explains confident but incorrect outputs.
  • Maryam Ashoori notes hallucinations arise because statistical token prediction can produce plausible but false text.
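The point about statistical token prediction can be illustrated with a toy model. This is a minimal sketch, not a real LLM: the transition table and its probabilities are invented for illustration, but it shows how a system that only samples from next-token probabilities can produce fluent text that happens to be false (here, naming a plausible but wrong capital).

```python
import random

# Invented next-token probabilities keyed by the last two tokens.
# The model "knows" only which continuations are likely, not which are true:
# "Sydney" is the more probable continuation, but "Canberra" is the truth.
NEXT_TOKEN_PROBS = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"Australia": 1.0},
    ("of", "Australia"): {"is": 1.0},
    ("Australia", "is"): {"Sydney": 0.7, "Canberra": 0.3},
}

def generate(prompt: str, steps: int, rng: random.Random) -> str:
    """Greedily sample continuations from the toy probability table."""
    tokens = prompt.split()
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if not dist:
            break  # no continuation known for this context
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)
```

Every output reads as a confident, grammatical sentence, regardless of whether the sampled capital is correct, which is the gap between plausibility and faithfulness the insight describes.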
ADVICE

Build Guardrails And Human Escalation

  • Design agentic guardrails that enforce faithfulness and escalate high-risk queries to humans.
  • Put humans in the loop for sensitive topics like medical allergies or other high-accuracy needs.
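The escalation pattern above can be sketched as a simple router. This is a hedged, minimal illustration, assuming a keyword-based risk check; the term list and the `route` function are invented for this example, not a production policy or anything described verbatim in the episode.

```python
# Illustrative high-risk terms; a real system would use a trained
# classifier and a reviewed policy, not a hand-picked keyword set.
HIGH_RISK_TERMS = {"allergy", "allergic", "dosage", "medication"}

def route(query: str) -> str:
    """Return 'human' for high-risk queries, else 'model'.

    High-risk queries (e.g. about medical allergies) are escalated to a
    human reviewer instead of being answered automatically.
    """
    words = {w.strip(".,?!").lower() for w in query.split()}
    return "human" if words & HIGH_RISK_TERMS else "model"
```

A dish-safety question mentioning an allergy would route to a human, while a routine opening-hours question would go to the model, which is the escalation split the advice recommends.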
ANECDOTE

Embedding Compliance Into Product Work

  • Natural Cycles embeds QA, regulatory, and compliance work into day-to-day product development, from brainstorming onward.
  • Dr. Magda Armbruster describes QA and compliance attending meetings and shaping product decisions early.