
The Stack Overflow Podcast: AI-assisted coding needs more than vibes; it needs containers and sandboxes
Mar 4, 2026
Mark Cavage, President and COO of Docker and a seasoned cloud infrastructure leader, talks hardened containers, sandboxes, and Docker's approach to securing AI agents. He explains hardened base images, MicroVM-based sandboxes, saving mutated environments, and observability and controls for agent workflows. The conversation also covers migration tools, scaling implications, and Docker's roadmap for agent-focused features.
AI Snips
Agents Will Multiply Containers And Expose A Trust Gap
- Containers remain the default deployment artifact as AI agents generate far more code that must run safely in production.
- Mark Cavage notes that companies will need many more containers as agents increase lines of code tenfold, exposing a trust gap around running those artifacts.
Harden Containers By Minimizing Surface And Publishing An SBOM
- Minimize attack surface, require provenance, and publish an SBOM when hardening containers.
- Mark Cavage lists removing shells, tracking the origin of components, continuously monitoring for vulnerabilities, and publishing clear SBOMs as core hardening steps.
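The hardening steps above can be sketched as a multi-stage Dockerfile. This is an illustrative example, not Docker's actual hardened-image tooling: the Go application, module layout, and image tags are assumptions. The final stage uses a minimal base with no shell or package manager, which is one common way to shrink the attack surface.

```dockerfile
# Build stage: compile with the full toolchain available.
# The app path ./cmd/server is a hypothetical example.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: a distroless base with no shell or package
# manager, so there is far less surface for an attacker
# (or a misbehaving agent) to exploit.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

For the SBOM step, BuildKit can attach one at build time with `docker buildx build --sbom=true .`, though the exact provenance and scanning workflow will depend on your registry and CI setup.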
Combine Automation With Human Compatibility Testing
- Automate base-image builds and vulnerability integration, but expect human work for compatibility testing and removing dev tools.
- Docker automates vulnerability feeds and patch detection, yet still runs full compatibility suites to ensure stripped-down images don't break applications.
