
Podnosis | Shadow AI: Dos and don’ts for healthcare orgs
How many doctors are using AI tools in a healthcare setting? It’s a question everyone wants answered, and one that’s impossible to pin down exactly. But recent data may offer a clue.
Wolters Kluwer, which creates evidence-based clinical decision support tools for physicians, surveyed more than 500 health professionals and found that 40% have encountered unauthorized AI tools at work and 17% have used them. A report on the findings refers to this unauthorized use of AI tools as “shadow AI.” But a blanket ban may not be the best approach.
To explain what organizations can do to protect themselves, their providers and their patients, Senior Writer Anastassia Gliadkovskaya speaks with Alex Tyrrell, CTO of Wolters Kluwer Health.
To learn more about the topics in this episode:
- 57% of healthcare professionals have encountered or used unauthorized AI tools at work
- ECRI flags misuse of AI chatbots as a top health tech hazard in 2026
- Microsoft unveils Copilot Health as an AI health companion for consumers
- OpenEvidence clinches $250M series D as AI platform sees explosive growth with doctors
- Some doctors are using public AI chatbots like ChatGPT in clinical decisions. Is it safe?
