
AI in Healthcare and Life Sciences Podcast AI That Clinicians Can Trust: Building Reliable Clinical Decision Support with Rhett Alden of Elsevier
Apr 2, 2026
Rhett Alden, a CTO at Elsevier who builds transparent clinical AI, discusses trustworthy, EMR-integrated decision support that cuts clinician burden. He covers risks such as hallucinations and dosing errors, and highlights ambient documentation, governance and training, and adding wearable and social data to improve care.
Clinician Trust Is The Primary Adoption Barrier
- Clinicians' main barrier to AI adoption is epistemic trust: they fear misinformation and need verifiable, best-practice answers.
- Rhett Alden cites a survey in which 74% of U.S. clinicians worry about misinformation and about whether they will be trained to use the tools effectively.
Small AI Errors Can Be Fatal In Medicine
- Errors or hallucinations in clinical AI can be life-threatening, making precision non-negotiable.
- Rhett Alden illustrates the stakes with dosing examples where a misplaced decimal or a change of units could be fatal.
Attach Peer-Reviewed References To Every Claim
- Build AI with transparency and peer-reviewed backing so clinicians can verify every claim.
- Elsevier's systems attach multiple peer-reviewed references to every recommendation to provide epistemic trust, says Rhett Alden.

