
The Innovation Engine Podcast, Episode 214: A Generative AI Primer for Healthcare Business Leaders
Sep 3, 2024
David Evans, Director of Global Innovation, who builds RAG architectures and cost-control AI systems, and Pankaj Chawla, Chief Innovation Officer focused on responsible GenAI in healthcare, discuss retrieval-augmented generation (RAG), why hallucinations occur and how to prevent them, cost-control strategies for hosted models, and architectures that protect patient data and privacy.
AI Snips
Why LLMs Hallucinate
- Hallucinations occur because LLMs predict the next token rather than verifying facts.
- Pankaj Chawla explains that LLMs generate text by predicting the next word, so responses can drift from the original intent and produce fluent but incorrect answers (see the sketch below).
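To make that mechanism concrete, here is a toy sketch of next-token generation, assuming a tiny bigram model built over an invented mini-corpus (the corpus, seed, and generate() helper are all illustrative, not anything from the episode). Each step only asks which word plausibly follows the last one, which is exactly why fluent output can still be false:

```python
import random

# Toy bigram "language model": each word maps to the words observed to
# follow it in a tiny invented corpus; counts stand in for learned
# probabilities. (Corpus and helper are illustrative only.)
CORPUS = (
    "the claim was denied because the code was invalid . "
    "the claim was approved because the code was valid . "
    "the code was reviewed by the payer ."
).split()

bigrams: dict[str, list[str]] = {}
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(prompt: str, max_tokens: int = 8, seed: int = 0) -> str:
    """Generate by repeatedly sampling a plausible next token.

    Nothing here checks facts: each step only asks "what word tends to
    follow the previous one?", so output can drift into a fluent but
    false statement.
    """
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        tokens.append(rng.choice(candidates))  # sample, never verify
    return " ".join(tokens)

print(generate("the claim was"))
# Prints a grammatical-looking continuation that may freely mix the
# "denied"/"approved" facts from the corpus: locally plausible at every
# step, with no guarantee the whole sentence is true.
```

Production LLMs do the same thing at vastly larger scale, which is why the grounding techniques in the next snip matter.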
Ground LLMs Using RAG
- Use retrieval-augmented generation (RAG) to ground LLM responses in curated data sources.
- David Evans describes pairing trusted fee schedules and procedure ontologies with an LLM so that answers come from a factual source rather than the model's memorized training data (see the sketch below).
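A minimal sketch of that grounding pattern, assuming an invented three-row fee schedule (the dollar amounts are made up), a naive word-overlap retriever, and a build_prompt() helper; all are hypothetical stand-ins, and a real system would use embeddings, a vector index, and an actual LLM call:

```python
# Minimal RAG sketch. Assumptions: the fee-schedule rows, the retriever
# heuristic, and the prompt template are illustrative, not the speakers'
# actual system.

FEE_SCHEDULE = [
    {"code": "99213", "desc": "office visit established patient low complexity", "fee": 92.47},
    {"code": "99214", "desc": "office visit established patient moderate complexity", "fee": 131.38},
    {"code": "71046", "desc": "chest x-ray two views", "fee": 31.49},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank curated rows by naive word overlap with the question.
    A production system would use embeddings and a vector index."""
    q_words = set(question.lower().split())
    scored = sorted(
        FEE_SCHEDULE,
        key=lambda row: len(q_words & set(row["desc"].split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, rows: list[dict]) -> str:
    """Ground the model: instruct it to answer only from retrieved rows."""
    context = "\n".join(f"[{r['code']}] {r['desc']}: ${r['fee']:.2f}" for r in rows)
    return (
        "Answer using ONLY the fee schedule below. "
        "If the answer is not there, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

question = "What is the fee for a moderate complexity office visit?"
print(build_prompt(question, retrieve(question)))
# The prompt then goes to the LLM; its answer is constrained to the
# trusted rows instead of whatever its training data happened to contain.
```

The key design choice is that the model never answers from memory: the curated rows travel inside the prompt, so every claim it makes is traceable to a vetted source.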
Return Citations And Measure RAG Quality
- Provide citations and measure response quality with a RAG assessment framework.
- Pankaj Chawla and David Evans recommend returning source citations with every answer and running human-validated RAG metrics to detect noise and verbosity (see the sketch below).
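A lightweight sketch of that assessment pass, assuming two illustrative metrics: noise as the share of retrieved chunks the answer never cites, and verbosity as answer length relative to a human-written reference. The RagResponse shape and metric definitions are assumptions, not the speakers' framework:

```python
from dataclasses import dataclass

@dataclass
class RagResponse:
    answer: str
    citations: list[str]   # ids of chunks the model says it used
    retrieved: list[str]   # ids of every chunk handed to the model

def noise_rate(r: RagResponse) -> float:
    """Fraction of retrieved chunks the answer never cites: high values
    suggest retrieval is pulling in irrelevant material."""
    if not r.retrieved:
        return 0.0
    unused = set(r.retrieved) - set(r.citations)
    return len(unused) / len(r.retrieved)

def verbosity(r: RagResponse, reference_answer: str) -> float:
    """Answer length relative to a human-validated reference answer;
    values well above 1.0 flag padded, rambling responses."""
    ref_len = max(len(reference_answer.split()), 1)
    return len(r.answer.split()) / ref_len

# Illustrative example (ids and text are made up).
resp = RagResponse(
    answer="The fee for CPT 99214 is $131.38 [fee-schedule-2024].",
    citations=["fee-schedule-2024"],
    retrieved=["fee-schedule-2024", "policy-manual-ch3", "faq-page"],
)
print(f"noise: {noise_rate(resp):.2f}")      # 0.67 -> 2 of 3 chunks unused
print(f"verbosity: {verbosity(resp, 'CPT 99214 costs $131.38.'):.2f}")
```

Scoring every response this way, with the reference answers written and validated by humans, turns "is the RAG pipeline working?" from a gut feeling into a number a team can track over time.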


