
Data Skeptic Interpretability Tooling
Mar 13, 2020
Pramit Choudhary, lead data scientist at H2O.ai, specializes in model interpretability and AutoML. He discusses global versus local interpretation and the influence of LIME, and explains Skater, a unified open-source interpretability interface. He also covers real-world uses in finance and healthcare, perturbation and robustness testing across text, image, and audio, and integrating interpretability into the model lifecycle.
Episode notes
Interpretability Discovered By Practical Need
- Pramit first encountered interpretability when his team wanted to query models directly rather than rely on distorted human reports.
- That practical need led him to discover the academic field of model interpretation.
Global And Local Interpretation Are Complementary
- Interpretation can be scoped both globally and locally to explain overall model behavior and individual predictions.
- Pramit Choudhary emphasizes combining global feature importance with local, per-prediction explanations to get a full picture.
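To make the two scopes concrete, here is a minimal numpy-only sketch (an illustration, not code from the episode or from Skater): for a linear model, a global view ranks features by coefficient magnitude times feature spread across the dataset, while a local view attributes a single prediction to per-feature contributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a linear model with least squares (stand-in for any trained model).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Global view: importance of each feature over the whole dataset.
global_importance = np.abs(coef) * X.std(axis=0)

# Local view: contribution of each feature to ONE prediction.
x = X[0]
local_contributions = coef * x   # sums to this point's prediction
prediction = x @ coef

print("global importance:", global_importance)
print("local contributions:", local_contributions)
```

The global ranking says which features drive the model overall; the local breakdown can disagree with it for an individual point, which is why the episode stresses using both.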
Build Consistent, Accessible Interpretation Tooling
- Unify scattered interpretability techniques into consistent tooling to help practitioners and researchers.
- Make interpretation accessible via easy interfaces so users can apply methods across model types.
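One way to read the "easy interface across model types" idea is a wrapper that assumes nothing beyond a `predict(X)` callable. The sketch below is hypothetical (the class and method names are illustrative, not Skater's actual API) and uses model-agnostic permutation importance:

```python
import numpy as np

class Explainer:
    """Model-agnostic wrapper: only needs a predict(X) -> y callable."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn

    def permutation_importance(self, X, y, n_repeats=5, seed=0):
        """MSE increase when each feature column is shuffled."""
        rng = np.random.default_rng(seed)
        base_error = np.mean((self.predict_fn(X) - y) ** 2)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            errors = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # break feature j's link to y
                errors.append(np.mean((self.predict_fn(X_perm) - y) ** 2))
            importances[j] = np.mean(errors) - base_error
        return importances

# Works with any model type: here, a hand-written linear predictor.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=400)
explainer = Explainer(lambda X: 2.0 * X[:, 0])
imp = explainer.permutation_importance(X, y)
print(imp)  # feature 0 matters, feature 1 does not
```

Because the wrapper treats the model as a black box, the same interface applies to trees, neural networks, or ensembles, which is the consistency argument the snip makes.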
