
Works in Progress Podcast The algorithm will see you now: Why radiologists haven't been replaced by AI
Mar 27, 2026 A deep dive into AI's real-world impact on medical imaging and why early benchmark wins did not equal clinical dominance. Discussion of commercial tools, narrow automation for single findings, and why models stumble outside test conditions. Coverage of data limits, training biases, regulatory lanes for assistive versus autonomous tools, and how institutional incentives shape adoption.
Episode notes
Radiology Is Designed For Automation
- Radiology is unusually well suited to automation because it uses digital images, clear benchmarks, and repeatable pattern-recognition tasks.
- More than 700 FDA-cleared models exist, and many vendors claim superior benchmark accuracy, yet real-world adoption remains limited.
Benchmarks Don’t Translate To Clinical Work
- Benchmarks overstate real-world performance because models are trained on clean, unambiguous cases and often fail outside their test conditions.
- Radiologists spend most of their time on communication, oversight, and other non-diagnostic tasks, so even accurate models replace only a fraction of their work.
Models Fail Across Hospital Sites
- Many approved models are validated on narrow datasets (38% were tested at a single hospital), so out-of-sample performance can drop by as much as 20 percentage points.
- Differences in equipment, imaging technique, and recording protocols create site-specific failure modes.
