
Day Two DevOps D2DO295: Risks and Benefits of Putting AI in Production
Mar 4, 2026
Rich Mogull, Chief Analyst at the Cloud Security Alliance and a longtime cloud security expert, walks through the operational and risk trade-offs of putting AI into production. He covers AI-caused outages, how coding agents shift developer responsibility, non-determinism and prompt risks, using AppSec pipelines to catch AI regressions, and defense strategies like isolation, segmentation, and zero trust.
Coding Agent Caused A Risky Push
- Rich Mogull recounts a developer using an AI coding agent in AWS's Kiro IDE to push code that caused a service outage.
- He stresses that the human was ultimately responsible, but the agent enabled a fast, risky push that bypassed expected checks.
LLMs Behave Like Human Brains
- Generative AI is non-deterministic: like a human brain, it can produce different outputs for identical prompts because of varied internal pathways and context limits.
- This explains why AI can introduce regressions and inconsistent code behavior tied to context-window limits and model variability.
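The non-determinism above comes largely from sampling: a model assigns scores (logits) to candidate tokens and then draws from that distribution, so the same prompt can yield different continuations on different runs. A minimal sketch of temperature-based sampling, with made-up tokens and logits chosen purely for illustration:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float,
                      rng: random.Random) -> str:
    """Draw one token from a softmax over logits.

    Higher temperature flattens the distribution, making unlikely
    tokens more probable; lower temperature sharpens it.
    """
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point rounding at the boundary

# The same "prompt" (same logits) sampled with different random states
# can return different tokens -- the source of run-to-run variability.
logits = {"retry": 2.0, "fail": 1.5, "timeout": 1.4}
for seed in range(3):
    print(sample_next_token(logits, temperature=1.0, rng=random.Random(seed)))
```

Real inference adds further variability (batching effects, context truncation, model updates), so even temperature 0 does not guarantee identical outputs in production.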
Never Let AI Push Directly To Prod
- Keep CI/CD gates and AppSec controls in place when using AI coding agents, and never automate direct pushes to production.
- Use static analysis, software composition analysis (SCA), test-driven pipelines, and security-review personas to scan AI-generated code before deployment.
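One way to enforce the gates above is a pre-deploy script that runs each check and refuses to proceed on any failure. A hedged sketch: the specific tools named in the comments (linter, SCA scanner, test runner) are illustrative choices, not ones prescribed in the episode, and the gate list would be adapted to your stack.

```python
import subprocess
import sys

# Hypothetical gate definitions: each entry is (label, command).
# The tools here are examples only; substitute your own pipeline steps.
GATES = [
    ("static analysis", ["ruff", "check", "."]),        # lint AI-generated code
    ("dependency scan (SCA)", ["pip-audit"]),           # flag vulnerable packages
    ("test suite", ["pytest", "-q"]),                   # catch regressions
]

def run_gates(gates: list[tuple[str, list[str]]]) -> bool:
    """Run each gate command in order; any non-zero exit blocks the deploy."""
    for name, cmd in gates:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"BLOCKED: {name} failed; code will not be deployed")
            return False
    print("All gates passed; deploy may proceed")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_gates(GATES) else 1)
```

The key design point is that the script's exit code is what the pipeline trusts, so an agent cannot skip a gate without the failure surfacing before production.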

