Data Skeptic

Interpretability

Jan 7, 2020
Christoph Molnar, PhD researcher in statistics at LMU Munich and author of Interpretable Machine Learning, defines what interpretability means, who benefits from explanation tools, and when models become hard to understand. He contrasts simple models with complex ones, explains common explanation techniques such as sensitivity analyses and RuleFit, and discusses the limits of and future directions for explainability research.
ADVICE

Use Relatable Datasets For Practice

  • Use relatable datasets (e.g., bike-sharing) to practice interpretability and spot intuitive patterns.
  • Visualize effects like temperature versus rentals to validate model explanations against domain knowledge.
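The advice above can be sketched in code: a minimal, hypothetical example on synthetic bike-rental data (the feature names and data-generating process are illustrative, not from the episode), fitting a model and tracing a partial-dependence-style curve for temperature so the model's learned effect can be checked against domain intuition.

```python
# Hypothetical sketch: checking a model's learned temperature effect on
# synthetic bike-rental data (features and relationships are made up).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
temp = rng.uniform(0, 35, n)        # degrees Celsius
humidity = rng.uniform(20, 100, n)  # percent
# Assumed pattern: rentals rise with temperature up to ~25C, then plateau.
rentals = 100 + 20 * np.minimum(temp, 25) - 0.5 * humidity + rng.normal(0, 10, n)

X = np.column_stack([temp, humidity])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, rentals)

# Manual partial dependence: sweep temperature over a grid and average
# predictions across the observed humidity values.
grid = np.linspace(0, 35, 8)
pd_curve = [
    model.predict(np.column_stack([np.full(n, t), humidity])).mean()
    for t in grid
]
# Plotting pd_curve against grid should show rentals rising with
# temperature, matching the intuition the snip describes.
```

A curve that contradicts domain knowledge (e.g., rentals falling as temperature rises) is a cue to inspect the data or the model rather than trust the explanation.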
INSIGHT

Sensitivity Analysis Underlies Many Methods

  • Many model-agnostic interpretability methods perform sensitivity analysis by perturbing inputs and measuring output changes.
  • Some methods (e.g., SHAP, LIME) share unifying representations despite different origins.
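The perturb-and-measure idea above can be illustrated with one common instance, permutation feature importance: shuffle one input column, see how much the model's error grows. This is a minimal sketch on synthetic data, not code from the episode or the book.

```python
# Minimal sketch of perturbation-based sensitivity analysis
# (permutation feature importance) with a synthetic dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
# Feature 0 matters a lot, feature 1 a little, feature 2 not at all.
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)

model = LinearRegression().fit(X, y)
base_error = np.mean((model.predict(X) - y) ** 2)

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # perturb one input
    err = np.mean((model.predict(Xp) - y) ** 2)
    importances.append(err - base_error)  # output change = sensitivity
# Feature 0 should dominate; the irrelevant feature scores near zero.
```

The same perturb-inputs, measure-output-change loop underlies many model-agnostic methods; they differ mainly in how inputs are perturbed and how the resulting changes are attributed.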
ADVICE

Pick Methods By Data And Access

  • Prefer model-agnostic methods for tabular data and model-specific ones when you need deeper access (e.g., gradients for saliency).
  • Beware interpreting models trained on uninterpretable features like embeddings; input interpretability matters.