
Super Data Science: ML & AI Podcast with Jon Krohn 689: Observing LLMs in Production to Automatically Catch Issues
Jun 20, 2023
ML observability experts Amber Roberts and Xander Song discuss drift detection, model retraining, model bias, and fairness in AI development. They cover Arize's open-source product, Phoenix, the importance of monitoring production models for issues, and the scalability challenges of LLMs.
Chapters
Intro
00:00 • 3min
Training Session on Large Language Models and ML Observability Platforms
03:02 • 9min
Monitoring Drift in Machine Learning Models
11:43 • 10min
Differences Between ML Monitoring and ML Observability
21:20 • 7min
Exploring ML Observability and Introducing the Arize Commercial Platform and Phoenix
28:44 • 2min
Exploring Phoenix: Monitoring Machine Learning Models in Production
31:01 • 12min
Detecting Data Distribution Changes in Production and Scalability Challenges with LLMs
43:27 • 4min
Ensuring Model Safety and Performance in Production with ML Observability
47:10 • 27min
ML Observability and Monitoring Production Issues
01:13:41 • 4min
