
Dev Interrupted: Why AI-assisted PRs merge at half the rate of human code | LinearB’s 2026 Benchmarks
Mar 24, 2026
The hosts unpack LinearB’s 2026 benchmarks, which show that AI-assisted pull requests merge at far lower rates than human-written code. They compare the behavior of unassisted, AI-assisted, and fully agentic PRs and explore why AI produces larger changes that take longer to be picked up for review. They also highlight bottlenecks in review processes, the need for context engineering, and the readiness gaps organizations must close before AI can boost delivery.
Three PR Classes Reveal Different AI Behaviors
- LinearB classifies PRs into unassisted, AI-assisted, and agentic categories to compare behaviors.
- Agentic PRs are created entirely by an AI agent and are the least mature class observed in the data.
Bigger AI PRs Create Review Bottlenecks
- AI-assisted PRs are larger and wait much longer for review pick-up than unassisted PRs.
- At P75, assisted PRs are ~2.5x larger and have pick-up times ~5x longer, creating review bottlenecks.
AI Pushes PR Size Past The 300 Line Practical Limit
- Typical AI-assisted PR size at P75 is ~400 LOC, versus 157 LOC for unassisted PRs, exceeding the commonly recommended 300-LOC threshold.
- Bigger PRs raise reviewers' mental load and the complexity of each review, slowing timely, high-quality feedback.
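The P75 comparisons above can be reproduced from raw PR data with a simple percentile calculation. Below is a minimal sketch; the sample LOC values are invented for illustration and are not LinearB's dataset.

```python
# Hypothetical sketch: computing a P75 (75th percentile) comparison of
# PR sizes across categories, as in LinearB-style benchmarks.
# Sample numbers are made up for illustration.
from statistics import quantiles

def p75(values):
    """75th percentile with inclusive (linear) interpolation."""
    return quantiles(values, n=4, method="inclusive")[2]

# Invented sample PR sizes (lines of code changed) per category.
unassisted_loc = [40, 80, 90, 110, 120, 150, 160, 200]
assisted_loc   = [150, 260, 300, 350, 380, 410, 420, 500]

print(f"unassisted P75: {p75(unassisted_loc):.0f} LOC")
print(f"assisted   P75: {p75(assisted_loc):.0f} LOC")
print(f"size ratio at P75: {p75(assisted_loc) / p75(unassisted_loc):.1f}x")
```

The same `p75` helper applies unchanged to review pick-up times; comparing percentiles rather than means keeps a few outlier mega-PRs from dominating the comparison.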
