
Machine Learning Street Talk (MLST) Jordan Edwards: ML Engineering and DevOps on AzureML
Jun 3, 2020
Jordan Edwards, Principal Program Manager for AzureML at Microsoft, dives into the world of ML DevOps and the challenges of deploying machine learning models. He discusses how to bridge the gap between science and engineering, emphasizing model governance and testing. Jordan shares insights from the recent Microsoft Build conference, highlighting innovations like FairLearn and GPT-3. He also introduces his maturity model for ML DevOps and explores the complexities of collaboration in machine learning workflows, making for a thought-provoking conversation.
ML Requires a Team
- Production enterprise ML requires diverse expertise, including data engineers, data scientists, and other specialists.
- No single person can handle all aspects of this complex process.
Model Testing
- Use golden test datasets for basic model validation across expected scenarios.
- Use tools like InterpretML, SHAP, and LIME for model interpretability and bias detection.
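The golden-dataset idea above can be sketched as a simple pass/fail gate: run the model over a small curated set of examples with known-correct answers and assert a minimum accuracy. The `predict` function, the data, and the threshold below are all illustrative assumptions, not code discussed in the episode:

```python
# Minimal sketch of a golden-dataset validation gate.
# `predict` is a hypothetical stand-in for a real model's inference call.

def predict(features):
    # Toy rule standing in for a trained model.
    return 1 if features["score"] >= 0.5 else 0

# Golden dataset: curated examples covering expected scenarios,
# each paired with its known-correct label.
GOLDEN = [
    ({"score": 0.9}, 1),
    ({"score": 0.1}, 0),
    ({"score": 0.7}, 1),
    ({"score": 0.3}, 0),
]

def golden_accuracy(predict_fn, dataset):
    correct = sum(1 for x, y in dataset if predict_fn(x) == y)
    return correct / len(dataset)

accuracy = golden_accuracy(predict, GOLDEN)
assert accuracy >= 0.95, f"Golden test failed: accuracy={accuracy:.2f}"
```

In a CI pipeline, a gate like this would block deployment of a candidate model that regresses on scenarios the team has already agreed it must handle.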
Fairness Analysis
- Training-time fairness analysis and bias busting are valuable tools for responsible AI.
- Production model fairness monitoring is often reactive, relying on reports or labeling feedback.
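The reactive monitoring described above can be sketched as a periodic job that computes a fairness metric per group from labeled production feedback and alerts when groups diverge. The group names, records, and alert threshold here are illustrative assumptions, and positive-prediction-rate gap is just one of many possible disparity metrics:

```python
# Sketch of reactive fairness monitoring: compare positive prediction
# rate across groups in logged, labeled production feedback.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: list of (group, prediction) pairs from production logs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, prediction in records:
        counts[group][0] += prediction
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparity(rates):
    """Largest gap in positive rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical feedback batch: two groups, A and B.
feedback = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rate_by_group(feedback)
if disparity(rates) > 0.2:  # illustrative alerting threshold
    print(f"Fairness alert: positive rates differ across groups: {rates}")
```

Libraries like FairLearn, mentioned in the episode, provide vetted group-fairness metrics for this kind of analysis; the sketch only shows the monitoring shape.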
