
Health Wanted: DIY Health Care
Jan 23, 2026

Dr. Stephen Lin, a family physician and Stanford faculty member who leads the HEART team researching AI in primary care, discusses AI's role in clinical workflows. He describes using AI to cut clinicians' administrative work and stresses the need for physician oversight. He warns about patient-facing chatbots and legal and privacy gaps, and urges transparency, consent, and cautious optimism about AI in routine care.
Episode notes
Beware Broad Direct-To-Consumer Testing
- Direct-to-consumer lab and MRI subscription services offer many tests, but the tests often lack any medical indication.
- Laurel Bristow warns that over-testing can cause anxiety and lead to unnecessary invasive follow-ups.
High Usage, Mixed Accuracy For Health Chatbots
- Millions of people use chatbots for health queries, and some receive accurate answers for rare conditions.
- However, recent studies show LLMs provide potentially harmful health information in a significant fraction of responses.
Chatbots Feel Smart But Can Mislead
- Patient-facing health chatbots are attractive because people want quick answers and reassurance.
- Stephen Lin cautions that LLMs are next-word predictors that can be confidently wrong, risking misinformation.
