
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Trends in Machine Learning & Deep Learning with Zack Lipton - #334
Dec 30, 2019
In this engaging discussion, Zack Lipton, a Professor at CMU with expertise in machine learning, explores key advances from 2019 in the field. He delves into the evolution of deep learning, noting the impact of models like BERT and challenges related to distribution shifts. Lipton also discusses innovative approaches in causal inference and fairness, advocating for continued research on model robustness. Lastly, he shares predictions about commodification in AI and the need for inclusive participation in the future landscape of machine learning.
AI Snips
New Holdout Set Results
- Models evaluated on the new holdout sets performed worse but maintained their relative ranking.
- This suggests that overfitting through repeated test-set reuse is less severe than feared, but that classifiers remain brittle under distribution shift.
Reading Comprehension and NLI
- A study of reading comprehension datasets revealed that models could achieve high performance without reading the full input, for example from the question or the passage alone.
- Similar issues were found in natural language inference, highlighting the gap between benchmark success and true competence.
BERT and Semi-Supervised Learning
- BERT leverages semi-supervised learning to improve NLP tasks by pretraining a language model on unlabeled data and fine-tuning it on labeled data.
- This approach led to significant performance gains across various NLP tasks, setting a new standard in the field.
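The pretrain-then-fine-tune pattern described above can be sketched in miniature. The following is a toy numpy illustration of the idea only, not BERT itself: embeddings are first trained on unlabeled sequences via a masked-token prediction objective, then reused as features for a small supervised classifier. All data, sizes, and the labeling rule here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 20, 8

# --- Step 1: "pretraining" on unlabeled sequences (toy masked-token task) ---
unlabeled = [rng.integers(0, vocab, size=6) for _ in range(200)]

E = rng.normal(scale=0.1, size=(vocab, dim))   # token embeddings (the "language model")
W = rng.normal(scale=0.1, size=(dim, vocab))   # output projection over the vocabulary

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.1
for seq in unlabeled * 5:
    i = rng.integers(1, len(seq) - 1)          # pick one interior position to "mask"
    ctx = E[seq[i - 1]] + E[seq[i + 1]]        # predict it from its neighbors
    p = softmax(ctx @ W)
    grad_out = p.copy()
    grad_out[seq[i]] -= 1.0                    # cross-entropy gradient w.r.t. logits
    W -= lr * np.outer(ctx, grad_out)
    g_ctx = W @ grad_out                       # gradient w.r.t. the context vector
    E[seq[i - 1]] -= lr * g_ctx
    E[seq[i + 1]] -= lr * g_ctx

# --- Step 2: fine-tuning on a small labeled set, reusing the embeddings ---
labeled = [rng.integers(0, vocab, size=6) for _ in range(40)]
labels = np.array([int(seq.sum() % 2) for seq in labeled])  # hypothetical task

w, b = np.zeros(dim), 0.0
for _ in range(100):
    for seq, y in zip(labeled, labels):
        x = E[seq].mean(axis=0)                # pretrained embeddings as features
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w -= 0.1 * (p - y) * x
        b -= 0.1 * (p - y)
```

The point of the sketch is the division of labor: the expensive representation learning happens once on plentiful unlabeled data, while the labeled task only trains a small head on top, which is the structure that let BERT lift performance across many NLP tasks.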

