Machine Learning Street Talk (MLST)

Explainability, Reasoning, Priors and GPT-3

Sep 16, 2020
Dr. Keith Duggar, MIT PhD and AI expert, joins for a discussion on explainability in machine learning. They dive into Christoph Molnar's work on interpretability and the intricacies of neural networks' reasoning. Duggar contrasts priors with experience, touches on core knowledge, and discusses critiques of deep learning from notable figures like Gary Marcus. The conversation culminates in the ethical implications and challenges of GPT-3's reasoning, highlighting broader questions about machine intelligence and the future of AI.
INSIGHT

Confirmation Bias and Explanations

  • Good explanations should align with prior beliefs, even if those beliefs are illogical.
  • Confirmation bias in humans is analogous to a conservative learning rate in machine learning models.
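The analogy above can be sketched in code (a hypothetical illustration, not from the episode): a conservative learning rate makes a model cling to its prior estimate, much as confirmation bias makes a person discount contradicting evidence.

```python
def update(belief: float, evidence: float, lr: float) -> float:
    """Move `belief` toward `evidence` by a fraction `lr` of the gap."""
    return belief + lr * (evidence - belief)

belief = 0.9    # strong prior belief
evidence = 0.1  # strongly contradicting observation

# A conservative (low) learning rate barely budges the belief,
# while a larger one moves it substantially toward the evidence.
conservative = update(belief, evidence, lr=0.05)  # 0.86
open_minded = update(belief, evidence, lr=0.5)    # 0.5
```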
INSIGHT

Trustworthy Explanations and Transparency

  • Trustworthy explanations enhance trust in a process, like credit card applications.
  • However, simpler, transparent models might not be as accurate as complex ones.
ADVICE

Prioritize Different Knowledge

  • Improve models by adding different kinds of knowledge rather than increasing complexity.
  • Focus on encoding different types of structure, such as affine transformations or polynomial functions.
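A minimal sketch of this idea (a hypothetical example, not from the episode): rather than enlarging the model, encode the structural hypothesis that the target is polynomial by expanding the features, then fit a plain least-squares linear model.

```python
import numpy as np

def polynomial_features(x: np.ndarray, degree: int) -> np.ndarray:
    """Expand a 1-D input into columns [1, x, x^2, ..., x^degree]."""
    return np.vander(x, N=degree + 1, increasing=True)

x = np.linspace(-1, 1, 50)
y = 3.0 * x**2 - 2.0 * x + 1.0  # quadratic ground truth

# The linear solver recovers the quadratic exactly because the
# polynomial structure was supplied as knowledge, not learned.
X = polynomial_features(x, degree=2)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # ≈ [1.0, -2.0, 3.0]
```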