Super Data Science: ML & AI Podcast with Jon Krohn

927: Automating Code Review with AI, feat. CodeRabbit’s David Loker

Sep 30, 2025
David Loker, Director of AI at CodeRabbit, discusses automating code reviews with AI, emphasizing its role in improving developer workflows. He explains how CodeRabbit offers real-time feedback, tackling the challenges of agentic AI and context engineering. Loker shares insights on 'vibe coding', the future of AI creativity, and the importance of privacy with zero-data-retention policies. He also contrasts traditional and modern coding productivity measures, advocating for a holistic approach to developer satisfaction beyond just lines of code.
INSIGHT

Shift Metrics From Lines To Outcomes

  • Measure developer value by features, hypotheses tested, and outcomes rather than lines of code or PR counts.
  • Productivity shifts toward creativity and shipping useful features as AI handles routine coding.
ADVICE

Provide Controlled LLM Access For Security

  • Embrace AI enterprise-wide but enforce zero-data-retention and isolated sandboxes for code review runs.
  • Offer sanctioned LLM access or self-hosting so developers get productivity gains without IP leakage risks.
INSIGHT

Improve Models Without Training On Customer Code

  • Learn from user feedback and open-source usage instead of training on customer code when privacy is required.
  • Product teams can improve review behavior via human-in-the-loop updates and internal reinforcement learning without storing customer IP.