Troubleshooting Agile

Agentic Validation and the Power of Loops

Apr 15, 2026
Jeffrey Fredrick and Douglas Squirrel debate organisations rushing AI rollouts without learning change-management lessons. They explore when to automate status reporting with LLMs and when that misses the point. They introduce agentic validation, a practice of documenting predictions and checking outcomes over time, and they unpack monitoring-driven rollouts, rollback safeguards, and iterative AI loops that experiment and refine results.
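The "agentic validation" idea described above (documenting predictions, then checking outcomes over time) could be sketched minimally as follows. This is a hypothetical illustration, not anything from the episode itself: the `Prediction` and `ValidationLog` names and their fields are invented for this example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    """One documented prediction, checked later against an observed outcome."""
    made_on: date
    claim: str
    expected: object
    actual: object = None
    checked: bool = False

    def validate(self, observed):
        # Record the observed outcome and report whether the prediction held.
        self.actual = observed
        self.checked = True
        return self.actual == self.expected

class ValidationLog:
    """Accumulates predictions so they can be revisited over time."""
    def __init__(self):
        self.entries = []

    def predict(self, claim, expected, made_on=None):
        p = Prediction(made_on or date.today(), claim, expected)
        self.entries.append(p)
        return p

    def outstanding(self):
        # Predictions whose outcomes have not yet been checked.
        return [p for p in self.entries if not p.checked]

# Usage: document a prediction, then check it once the outcome is known.
log = ValidationLog()
p = log.predict("Automating the weekly report will cut manual toil", expected=True)
# ...time passes, the outcome is observed...
held = p.validate(observed=True)
```

The point of the loop is the record-then-check discipline, not the data structure: any durable log (a spreadsheet, a ticket queue) would serve the same role.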
AI Snips
INSIGHT

AI Rollouts Are Repeating Old Change Mistakes

  • Organisations are repeating old change-management mistakes by treating AI rollouts as purely technical deployments rather than learning from past Agile adoption lessons.
  • Jeffrey Fredrick heard teams focus on automating artifacts (like weekly reports) instead of re-examining the underlying purpose of those artifacts, which risks preserving pointless bureaucracy.
ADVICE

Reassess Before Automating Status Reports

  • Do re-evaluate the purpose of existing artifacts before automating them with LLMs rather than just recreating historical reports.
  • Jeffrey Fredrick warns that automating a status report may remove the human engagement that the report originally enforced.
INSIGHT

Use AI For Routine Counting Tasks

  • Some projects are like brick walls where progress equals simple counts, and computers excel at consistent, repeatable measurement.
  • Douglas Squirrel argues that for well-understood, countable work, LLMs generating and checking reports can be perfectly appropriate.