
AI Security Podcast How Lovable Manages 100+ Daily Changes, Vibe Coding & Shadow AI
Apr 2, 2026 Igor Andriushchenko, Head of Security at Lovable and former DevOps/DevSecOps engineer, describes securing an AI-native platform amid explosive change. He explains how AI multiplies CI/CD churn. He covers PAM guardrails for agents, why allow/deny logic flips for agents, SCA risks from hallucinated packages, new AI-native code scanners, and practical crawl-walk-run controls for internal AI tooling.
AI Snips
AI Churn Is Breaking Traditional CI/CD
- AI-driven developer workflows massively increase change churn, outstripping traditional CI/CD capacity.
- Igor says developers can now ship changes "100 times per day," which load-tests CI/CD and breaks old pipelines.
Start With Air Pockets For Safe AI Adoption
- Find controlled "air pockets" where AI can be used safely for prototyping without connecting sensitive data.
- Igor recommends starting with internal tooling or dashboards that don't touch production or PII to prove value.
Use PAM To Block Agent Access To Production Secrets
- Treat AI agents like human developers and enforce the same privilege controls.
- Igor uses PAM permits so agents cannot escalate to production secrets, and human approval is required for sensitive actions.
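
The agent guardrail described above can be sketched as a default-deny authorization check. This is a minimal illustration, not Lovable's actual implementation; the role, scope, and approval names are hypothetical.

```python
# Sketch of a PAM-style guardrail: agents face the same privilege checks
# as human developers, but sensitive scopes additionally require a human
# approval before an agent can act. All names here are illustrative.
from dataclasses import dataclass

SENSITIVE_SCOPES = {"prod:secrets", "prod:deploy"}

@dataclass
class Principal:
    name: str
    is_agent: bool       # AI agents get the same checks as humans...
    scopes: frozenset    # ...but cannot use sensitive scopes unattended

def authorize(principal: Principal, scope: str, human_approved: bool = False) -> bool:
    """Return True only if the principal may act in `scope`."""
    if scope not in principal.scopes:
        return False                  # default-deny: no granted scope, no access
    if scope in SENSITIVE_SCOPES and principal.is_agent:
        return human_approved         # escalation needs a human in the loop
    return True

agent = Principal("code-agent", is_agent=True,
                  scopes=frozenset({"repo:write", "prod:secrets"}))
print(authorize(agent, "prod:secrets"))                       # False
print(authorize(agent, "prod:secrets", human_approved=True))  # True
print(authorize(agent, "repo:write"))                         # True
```

The key design choice, echoing the episode, is that the allow/deny default flips for agents on sensitive scopes: even a scope the agent nominally holds is denied unless a human approves that specific action.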
