

Dev Interrupted
LinearB
Software itself is fundamentally changing. We explore the transition to agentic orchestration, vibe coding, and AI-native development, grounding the conversation in the principles that have always defined great engineering.

On Tuesdays, we interview the founders, architects, and builders of the world's most impactful tech to uncover the timeless engineering principles and strategies shaping the next era of development.

And on Fridays, we drop an end-of-week roundup of the biggest news in AI and software, and what it actually means for your career, your craft, and your life as a developer.

Subscribe to stay ahead of the next era of code.
Episodes

Apr 7, 2026 • 42min
Stop measuring AI adoption. Start measuring AI impact. | LinearB’s APEX framework
They introduce APEX, a framework for measuring AI's real impact on software delivery, and explain why DORA and SPACE fall short for AI-driven work. They make the case for treating AI as a first-class SDLC contributor and for avoiding the illusion of coding speed. The four APEX pillars point to where teams should focus first: AI Leverage, Predictability, Efficiency, and Developer Experience.

Apr 3, 2026 • 36min
Virtual pets in your terminal, ads in your pull request, & no more CSS in your browser?
They debate ads appearing inside pull requests and the trust problems that creates. They unpack a massive code leak that revealed hidden model features and implementation gaps. They explain Shopify’s play to cut AI inference costs by 75x with smaller self-hosted models. They explore Pretext’s trick for measuring text before the DOM and playful terminal “virtual pets” from leaked tooling.

Mar 31, 2026 • 37min
Retrofit or reimagine? Developer environments for humans and agents | Ona’s Matt Boyle
Matt Boyle is Head of Product, Design, and Engineering at Ona and a former Gitpod leader focused on cloud dev environments and agent security. He talks about reimagining ephemeral workspaces for both humans and AI, kernel-level runtime controls like Project Veto, integrating agents with internal tools, and the decline of the traditional desktop IDE as web workspaces and agent loops reshape developer workflows.

Mar 27, 2026 • 30min
The T-shaped leader, Disney can’t catch a break, and will you trust Auto mode?
They unpack why AI video products struggle and why a big OpenAI-Disney deal fell apart. The conversation dives into Claude Code’s Auto Mode, permissioning, and the dangers of unscoped agents. Listen as they debate YOLO trade-offs, harness engineering, and the rise of T-shaped engineers and leaders reshaping software roles.

Mar 24, 2026 • 40min
Why AI-assisted PRs merge at half the rate of human code | LinearB’s 2026 Benchmarks
They unpack LinearB’s 2026 benchmarks showing AI-assisted pull requests merge at far lower rates than human-written code. They compare unassisted, assisted, and fully agentic PR behaviors and explore why AI creates larger, slower-to-pickup changes. They highlight bottlenecks in review processes, the need for context engineering, and readiness gaps organizations must fix before AI boosts delivery.

Mar 20, 2026 • 32min
Sloppypasta culprits, unpacking MCP’s spotlight, and Anthropic wants your agents to work the graveyard shift
They debate whether the Model Context Protocol was overhyped and why CLIs are making a comeback. They explain context anchoring to prevent model compaction during long coding sessions. They warn about AI amplifying bad processes when teams optimize the wrong bottlenecks. They celebrate the nostalgic resurgence of the decentralized small web. And they call out "sloppypasta" and explain how to avoid it.

Mar 17, 2026 • 40min
Many tokens make all bugs shallow & open source’s new maintainers | Chainguard's Dan Lorenc
Dan Lorenc, co-founder and CEO of Chainguard, works on securing the software supply chain and on agentic engineering. He discusses how autonomous agents are accelerating development and the security risks that follow. Topics include turning guardrails into reliable guide rails, agent-driven open source maintenance, automated many-token inspection that makes bugs shallow, sandboxing agent teams, and which parts of the stack agents will replace.

Mar 13, 2026 • 22min
Inference is the new 401k matching and what we’re learning from AI-related outages
They debate the idea of paying engineers with AI compute instead of cash and whether inference costs should fall on employees. They unpack the rise of harness engineering and how planning, docs, and guardrails shape agentic teams. They examine recent AI-related outages and AWS incidents and warn about the pressure to run dozens of autonomous agents. They also share personal agent experiments and laptop-permission risks.

Mar 10, 2026 • 40min
Your engineers need an AI control plane, not more tools | Guild.ai’s James Everingham
James Everingham, former Head of Dev Infra at Meta and CEO of Guild.ai, builds enterprise control planes for safe, auditable AI in development. He talks about weaving AI into the software lifecycle, developer-driven emergent agents like onboarding and risk scoring, why top-down mandates fail, and the need for a central AI control plane to govern, log, and scale agent workflows.

Mar 6, 2026 • 29min
The agent wasteland, federated workflows, and a computer for computers
They take a deep dive into viral OpenClaw adoption and the security risks that follow. They explore Steve Yegge's idea of federated "wastelands" for orchestrators, complete with reputation ledgers and wanted boards. They also discuss the Perplexity Computer as a persistent digital coworker, and what it means when AI makes basic development extremely cheap.


