
Data Skeptic: Interpretability Practitioners
Jun 26, 2020
Sungsoo Ray Hong, an assistant professor studying human–computer interaction and interpretability, joins to discuss industry practices and challenges around model explainability. He covers how HCI frames interpretability, study design with practitioners, how needs differ between black-box and white-box models, tooling and workflow pain points, scalability hurdles, and promising design directions.
Practitioners Face High-Stakes Examples
- Study participants were seasoned data scientists working on models ranging from linear regression to deep neural networks.
- Their interpretability work often involved high-stakes domains such as medical decisions and large social platforms.
Edge Cases Drive Interpretability Work
- Practitioners use interpretability primarily to detect edge cases and explain surprising behavior.
- Data scientists must also communicate findings to collaborators with varied ML knowledge.
Limits Of Global Explanations For Black Boxes
- Opinions diverge on whether global explanations of black-box models are feasible or useful.
- Some experts argue that human cognitive limits make a full understanding of complex neural networks unrealistic.
