
Modern CTO Why Software Is Never “Done” Anymore with Iccha Sethi, SVP of Engineering at Vanta
May 7, 2026 Iccha Sethi, SVP of Engineering at Vanta and a seasoned leader from Atlassian and GitHub, explains why AI means software is never truly finished. She discusses model drift, evaluation frameworks, and dashboards and alerts for AI features. Topics include CI evals, A/B model testing, team ownership of monitoring, and new skills like prompting and task chunking.
An AI Evaluation Maturity Model For Product Teams
- Vanta created an AI evaluation maturity model from traces to self-improving loops to standardize quality checks.
- Levels: traces, golden datasets built with SMEs, offline/online evaluators, experiments, and feedback loops for improvement.
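A minimal sketch of the "golden dataset plus offline evaluator" level described above. All names (`GoldenExample`, `evaluate`, the fact-containment check) are illustrative assumptions, not Vanta's actual implementation; real evaluators often use LLM judges or semantic similarity rather than substring matching.

```python
# Hypothetical offline evaluator: score model outputs against a
# golden dataset built with subject-matter experts (SMEs).
from dataclasses import dataclass


@dataclass
class GoldenExample:
    prompt: str
    expected_facts: list[str]  # facts an acceptable answer must contain


def evaluate(model_fn, dataset: list[GoldenExample]) -> float:
    """Return the fraction of examples where the model's output
    contains every expected fact (a simple pass/fail evaluator)."""
    passed = 0
    for ex in dataset:
        output = model_fn(ex.prompt).lower()
        if all(fact.lower() in output for fact in ex.expected_facts):
            passed += 1
    return passed / len(dataset)


# Usage with a stubbed model function standing in for a real LLM call:
dataset = [
    GoldenExample("Which SOC 2 category covers uptime?", ["availability"]),
]
score = evaluate(lambda prompt: "The Availability criterion covers uptime.", dataset)
```

The same `evaluate` function can be pointed at logged production traces for online evaluation, which is what makes the maturity levels build on each other.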
Run Evaluators In CI And Experiment Continuously
- Run evaluators both in CI/CD and on a recurring or on-demand schedule, and run experiments comparing models and prompting tweaks.
- Use weekly experiments to test new models, context changes, and temperature settings, then feed the findings back into the golden dataset.
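The two practices above can be sketched as a CI gate plus a recurring experiment grid. This is an assumed shape, not the show's actual code: `THRESHOLD`, `ci_gate`, and `run_experiments` are hypothetical names, and `run_eval` stands in for whatever scores a (model, temperature) configuration against the golden dataset.

```python
# Hypothetical sketch: fail CI on eval regressions, and run a
# recurring experiment grid over models and temperatures.
import itertools

THRESHOLD = 0.9  # illustrative quality bar for the CI gate


def ci_gate(score: float) -> None:
    """Fail the pipeline if the eval score drops below the bar."""
    if score < THRESHOLD:
        raise SystemExit(f"Eval score {score:.2f} below {THRESHOLD:.2f}")


def run_experiments(run_eval, models, temperatures):
    """Score every (model, temperature) pair on the golden dataset.

    Returns the best configuration and the full score table; hard
    cases surfaced here can be folded back into the golden dataset.
    """
    results = {}
    for model, temp in itertools.product(models, temperatures):
        results[(model, temp)] = run_eval(model, temp)
    best = max(results, key=results.get)
    return best, results
```

In CI/CD the gate runs on every change; the experiment grid runs on a schedule (e.g. weekly) or on demand when a new model ships.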
Measure Developer Health With Purposeful Metrics
- Use engineering metrics platforms like Span to track onboarding, PR velocity, investment mix, and AI tool impact.
- Iccha monitors time-to-first/10th merged PR, investment toward continuous improvement, AI adoption, PR size, and escape rate correlations.

