The Gradient: Perspectives on AI

Martin Wattenberg: ML Visualization and Interpretability

Nov 16, 2023
Martin Wattenberg, a professor at Harvard and co-founder of Google Research's People + AI Research (PAIR) initiative, discusses his path into ML visualization, the skepticism toward neural networks he encountered in the 1980s, organizing information in graphics, progressive disclosure of complexity in interface design, the evolution of conversational interfaces, developing tools for model understanding, and building trust in ML systems.
INSIGHT

Shared Representations in Translation

  • Multilingual translation models develop shared internal representations that map similar meanings across languages.
  • Visualizing these shared spaces helps predict translation quality and reveals a universal, language-like structure; a minimal visualization sketch follows below.
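The episode discusses visualizations built on Google's multilingual translation models, which are not publicly runnable as-is. As a stand-in, the hedged sketch below uses the off-the-shelf sentence-transformers library (the model name `paraphrase-multilingual-MiniLM-L12-v2` is an assumption, not something named in the episode) to show the same effect: translations of the same sentence land near each other in embedding space regardless of language.

```python
# Hypothetical sketch, not the models discussed in the episode:
# embed parallel sentences and check that meaning, not language,
# determines where they land.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Parallel sentences: two meanings, each in English, German, and French.
sentences = [
    ("The cat sleeps on the mat.",       "en", 0),
    ("Die Katze schläft auf der Matte.", "de", 0),
    ("Le chat dort sur le tapis.",       "fr", 0),
    ("I love reading books.",            "en", 1),
    ("Ich lese gerne Bücher.",           "de", 1),
    ("J'aime lire des livres.",          "fr", 1),
]

# A multilingual encoder trained to map translations near each other.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode([text for text, _, _ in sentences])

# Project the high-dimensional embeddings to 2D for plotting.
coords = PCA(n_components=2).fit_transform(embeddings)

markers = {"en": "o", "de": "s", "fr": "^"}
for (text, lang, meaning), (x, y) in zip(sentences, coords):
    plt.scatter(x, y, c=f"C{meaning}", marker=markers[lang], s=80)
    plt.annotate(lang, (x, y), textcoords="offset points", xytext=(5, 5))
plt.title("Translations cluster by meaning, not language")
plt.show()
```

If the shared-representation claim holds, points cluster by color (meaning) rather than by marker shape (language), which is exactly the structure the insight describes.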
INSIGHT

Calibrate Trust with Interpretability

  • Interpretability tools like TCAV explain model behavior by linking directions in embedding space to human-understandable concepts; a minimal sketch of the computation follows below.
  • Trust calibration, not blind trust, is critical: users must understand a model's limits and when it may be wrong.
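TCAV (Testing with Concept Activation Vectors, Kim et al. 2018) is a real PAIR method, but the sketch below reproduces only its core computation on synthetic activations; the full pipeline needs a trained network. The hidden direction, the activations, and the linear "class head" here are all fabricated for illustration.

```python
# Hypothetical sketch of the core TCAV computation on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 32  # width of the hypothetical hidden layer

# Step 1: collect layer activations for concept examples (e.g. "striped")
# and for random counterexamples. Here we fabricate them: concept
# activations are shifted along a planted ground-truth direction.
true_direction = rng.normal(size=d)
true_direction /= np.linalg.norm(true_direction)
concept_acts = rng.normal(size=(100, d)) + 2.0 * true_direction
random_acts = rng.normal(size=(100, d))

# Step 2: the Concept Activation Vector (CAV) is the normal of a linear
# classifier separating concept activations from random ones.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Step 3: the model's sensitivity to the concept is the directional
# derivative of a class logit along the CAV. For a linear head w·a + b
# the gradient w.r.t. the activation a is just w, so it reduces to w·cav.
w = rng.normal(size=d) + 1.5 * true_direction  # hypothetical class head
directional_derivative = w @ cav

print(f"CAV alignment with planted direction: {cav @ true_direction:.2f}")
print(f"Directional derivative along CAV: {directional_derivative:.2f}")
```

With a real network, step 3 would take the gradient of the class logit at the chosen layer for each input, and the TCAV score would be the fraction of inputs whose directional derivative is positive.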
INSIGHT

Rethink AI Interaction Metaphors

  • Users often over-trust language models because confident presentation masks their probabilistic errors.
  • Effective interaction with these AI systems requires new metaphors beyond 'person' or 'tool'.