
Using AI at Work: AI in the Workplace & Generative AI for Business Leaders 90: Using AI at Work to Create an AI Quality Assurance System with Hernan Lardiez
Feb 9, 2026
Hernan Lardiez, a seasoned tech leader and COO of Ragmetrics, specializes in model evaluation and monitoring for safer generative AI. He discusses why measurement matters for production AI, which metrics to track (such as accuracy and drift), manual versus automated evals, who should own monitoring, and practical onboarding, pricing, and ROI for continuous evaluation.
AI Snips
Define Metrics And Monitor For Drift
- Define what quality means for your use case and build metrics to measure it.
- Monitor accuracy over time to detect degradation and drift before it impacts customers.
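The monitoring step above can be sketched as a minimal rolling-accuracy drift check. This is an illustrative sketch, not Ragmetrics' implementation; the window size and alert threshold are assumed values you would tune for your use case.

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy of graded model outputs and flag degradation.

    `window` and `threshold` are illustrative defaults, not values from
    the episode; tune them to your traffic volume and risk tolerance.
    """

    def __init__(self, baseline_accuracy: float, window: int = 100,
                 threshold: float = 0.05):
        self.baseline = baseline_accuracy
        self.threshold = threshold
        # Store 1 for a correct output, 0 for an incorrect one.
        self.results: deque[int] = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.results.append(1 if correct else 0)

    def rolling_accuracy(self) -> float:
        # Treat an empty window as "no evidence of a problem yet".
        return sum(self.results) / len(self.results) if self.results else 1.0

    def drift_detected(self) -> bool:
        # Alert when rolling accuracy falls more than `threshold`
        # below the baseline measured at launch.
        return self.rolling_accuracy() < self.baseline - self.threshold
```

In use, each production response is graded (by a human reviewer or an automated eval) and fed to `record()`, and `drift_detected()` drives an alert before quality loss reaches customers.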
Test Early And Continue Monitoring Live
- Test during development (pre-production) by injecting test datasets and grading outputs.
- Continue with live monitoring (post-production) to catch hallucinations and maintain quality.
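A pre-production eval of the kind described (inject a test dataset, grade the outputs) can be sketched as a small harness. The exact-match grader below is an assumed placeholder for illustration; real setups typically swap in a stronger grader such as an LLM judge or semantic similarity.

```python
from typing import Callable

def run_eval(
    model: Callable[[str], str],
    test_set: list[tuple[str, str]],  # (prompt, expected answer) pairs
    grader: Callable[[str, str], bool],
) -> float:
    """Run each test prompt through the model, grade the output,
    and return accuracy over the test set."""
    correct = sum(grader(model(prompt), expected)
                  for prompt, expected in test_set)
    return correct / len(test_set)

def exact_match(output: str, expected: str) -> bool:
    # Simplest possible grader; production evals usually use
    # an LLM-as-judge or embedding-based comparison instead.
    return output.strip().lower() == expected.strip().lower()
```

Running this harness in CI during development, then pointing the same graders at sampled live traffic after launch, gives the pre- and post-production coverage the snip describes.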
Bank CIO's Fear Sparked The Project
- A Midwest bank CIO told Hernan he would not implement AI because he feared uncontrolled risk in a regulated market.
- That conversation motivated Hernan to build evaluation and monitoring tools to reduce that risk.
