
Data Skeptic Visualization and Interpretability
Jan 31, 2020
Enrico Bertini is an associate professor focused on data visualization and ML interpretability, and co-host of Data Stories. He discusses word cloud design and when simpler charts, such as bar charts, work better. He covers experimental methods for measuring visualization effectiveness, and he explores visual tools for inspecting neural networks, surrogate decision trees, and strategies for scaling and interacting with complex models.
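The surrogate decision tree idea mentioned above can be sketched in a few lines: fit a small, interpretable tree to a black-box model's predictions rather than to the true labels. The dataset, model choices, and tree depth below are illustrative assumptions, not details from the episode.

```python
# Minimal surrogate-tree sketch: a shallow decision tree is trained to
# mimic a black-box model (here, a random forest) so a human can read
# its decision rules. All model/data choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
bb_preds = black_box.predict(X)  # the surrogate learns the model's behavior, not y

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)
fidelity = surrogate.score(X, bb_preds)  # fraction of predictions the tree reproduces
print(f"surrogate fidelity: {fidelity:.2f}")
```

Fidelity, not accuracy on the true labels, is the relevant metric here: it measures how faithfully the interpretable tree approximates the complex model's decision space.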
Interpretability Is A Human Task
- Interpretability ultimately happens on the human side, and visualization is a core tool for presenting model behavior to people.
- Visualization researchers design representations and interactions to help humans explore complex decision spaces.
Visualization Effect Depends On Task
- Visualizations like word clouds vary in effectiveness depending on the task and on design choices.
- Word clouds can be good for search tasks but poor for precise quantitative judgments.
Design Then Test Visualizations
- Define the design space and tasks before choosing a visualization for evaluation.
- Run controlled experiments measuring performance metrics to compare visualization options.
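The evaluation loop above can be sketched with a toy analysis: collect per-participant performance metrics for each visualization condition and summarize them for comparison. The conditions, metrics, and numbers below are made up for illustration.

```python
# Hedged sketch of summarizing a controlled visualization experiment:
# per-participant accuracy and completion time for two chart conditions.
# All values are fabricated placeholders, not real study data.
from statistics import mean, stdev

results = {
    "bar_chart":  {"accuracy": [0.95, 0.90, 0.92, 0.97], "time_s": [4.1, 3.8, 4.5, 3.9]},
    "word_cloud": {"accuracy": [0.70, 0.65, 0.72, 0.68], "time_s": [7.2, 8.0, 6.9, 7.5]},
}

for vis, metrics in results.items():
    acc, t = metrics["accuracy"], metrics["time_s"]
    print(f"{vis}: accuracy {mean(acc):.2f} (sd {stdev(acc):.2f}), "
          f"time {mean(t):.1f}s (sd {stdev(t):.1f}s)")
```

In a real study these summaries would feed into significance tests across conditions; the point of the sketch is only that "effectiveness" is operationalized as measurable task performance.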
