
MLOps.community: Getting Humans Out of the Way — How to Work with Teams of Agents
Apr 7, 2026 Rob Ennals, creator of Broomy and a Staff Software Engineer experienced in large-scale distributed systems, explains how to design systems in which many agents run and self-validate in parallel. He covers visual screenshot QA, agent retry and verification loops, repo design and linting for agents, parallel agent selection, automated merge-conflict handling, and UI/compute strategies for scaling agent teams.
Require Screenshot Walkthroughs For Fast QA
- Teach agents to produce feature walkthrough docs with cropped screenshots and explanatory text as part of validation.
- Have a separate sub-agent re-run the walkthrough (Playwright spec + pixel diffs) to confirm screenshots match and flag regressions.
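The episode doesn't specify an implementation for the pixel-diff step, but the idea can be sketched with a minimal stdlib-only check. Assume the verification sub-agent has already re-run the Playwright spec and holds both the baseline and freshly captured screenshots as decoded pixel buffers of equal size (the decoding step and the 1% threshold here are assumptions, not from the talk):

```python
# Minimal pixel-diff check a verification sub-agent might run after
# re-taking screenshots from a Playwright spec. Images are modeled as
# flat byte sequences of equal length (a real pipeline would decode
# PNGs first; that step is omitted here).

def pixel_diff_ratio(baseline: bytes, candidate: bytes) -> float:
    """Fraction of pixel bytes that differ between two rendered frames."""
    if len(baseline) != len(candidate):
        raise ValueError("screenshots must have identical dimensions")
    differing = sum(a != b for a, b in zip(baseline, candidate))
    return differing / len(baseline)

def flag_regression(baseline: bytes, candidate: bytes,
                    threshold: float = 0.01) -> bool:
    """Flag the walkthrough screenshot as stale if more than
    `threshold` of the frame changed (1% is an assumed default)."""
    return pixel_diff_ratio(baseline, candidate) > threshold
```

In practice the sub-agent would run this per cropped screenshot in the walkthrough doc and surface any flagged frames back to the authoring agent for a retry.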
Level Up Autonomy As Models Improve
- As models improve you can grant agents more autonomy and manage them at higher abstractions (from pair-programmer to team manager).
- Managing agents differs from humans: you can run them harder, waste their time, and experiment without social cost.
Automate Verification With Lints Tests And Readmes
- Build verification into the system so humans don't inspect every line: add custom lint rules, strict unit test coverage, and file/folder READMEs.
- Let agents write tests and lint rules to enforce style (e.g., max 50-line functions) automatically.
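The talk mentions agent-written lint rules such as a 50-line cap on functions but doesn't show one; a minimal sketch of such a rule, using only Python's stdlib `ast` module (the function and constant names here are illustrative, not from the episode):

```python
import ast

MAX_LINES = 50  # the 50-line function cap mentioned in the talk

def long_functions(source: str) -> list[tuple[str, int]]:
    """Return (name, line_count) for every function exceeding MAX_LINES.

    A sketch of a custom lint rule an agent could generate and enforce
    in CI, so style violations are caught without human review.
    """
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_LINES:
                offenders.append((node.name, length))
    return offenders
```

Wired into CI as a failing check, this gives agents the same fast, deterministic feedback loop as unit tests: they can iterate against the rule without a human inspecting every line.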
