PurePerformance

AI-Ready Codebases: Engineering Discipline for Agentic AI with Adam Tornhill

Mar 30, 2026
Adam Tornhill, programmer, author of Your Code as a Crime Scene, and founder of CodeScene, brings expertise in behavioral code analysis. He discusses how legacy and low-quality code slow AI down, how to measure “AI-readiness” with Code Health, practical guardrails and refactoring patterns, and why testing and governance become critical when scaling agentic AI.
INSIGHT

Code Health Determines AI Effectiveness

  • AI performance depends heavily on code quality, and typical industry code is often far below what's needed for reliable agentic AI.
  • CodeScene found average Code Health ≈5.15 and recommends ~9.5+ Code Health to fully accelerate with agentic AI.
INSIGHT

Experience Amplifies Agentic AI Value

  • Experienced engineers gain more from agentic AI because their architecture and validation skills let them scale larger iterations.
  • Adam stopped writing manual code after decades but relies on his background to validate and guide AI output.
ADVICE

Coach Agents On Test Quality Patterns

  • Teach agents specific internal-quality patterns, especially for tests, since test code is often the weakest and reflects poor LLM training data.
  • Encode code-health rules and test patterns so agents can mimic high-quality test frameworks.