
Threat Vector by Palo Alto Networks: Securing the AI Supply Chain
Jan 8, 2026
Ian Swanson, an AI security leader and founder who built Protect AI and led AI/ML teams at AWS and Oracle, discusses securing the AI supply chain. He covers ML SecOps, hidden model sprawl and visibility gaps, malicious models, and runtime inference attacks. He also talks about vibe-coding risks, developer guardrails, discovery of shadow AI, and continuous red teaming to manage AI risk.
AI Snips
Models Are The Overlooked Attack Surface
- Models are the engine of AI and often the overlooked attack surface.
- Ian has observed tens of thousands of models running live in enterprises, many pulled from open repositories like Hugging Face and carrying hidden risks such as neural backdoors.
Test And Red Team Models Before Production
- Test, benchmark, evaluate, and red team models and AI applications before they reach production.
- Ian urges continuous testing at inference time to find threats like credential exfiltration embedded in model artifacts.
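The pre-production vetting described above can be sketched as a static scan of a pickle-serialized model artifact before anything is deserialized. The blocklist and function names below are illustrative assumptions, not taken from any specific tool; real scanners (and the attacks they catch) are considerably broader:

```python
import pickle
import pickletools

# Illustrative blocklist: callables whose presence in a pickle stream
# signals that loading the "model" would execute attacker-chosen code.
SUSPICIOUS = {
    ("builtins", "exec"), ("builtins", "eval"), ("builtins", "open"),
    ("os", "system"), ("posix", "system"), ("subprocess", "Popen"),
}

def scan_pickle(data: bytes) -> list[str]:
    """Statically list suspicious imports in a pickle stream (no loading)."""
    findings, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":        # protocol < 4: arg is "module name"
            module, _, name = str(arg).partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]  # protocol 4+: from stack
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
        if isinstance(arg, str):
            strings.append(arg)            # remember recent string constants
    return findings

# A trojaned artifact is flagged; an innocuous weights dict is not.
class Evil:
    def __reduce__(self):
        return (eval, ("1 + 1",))

assert scan_pickle(pickle.dumps(Evil())) == ["builtins.eval"]
assert scan_pickle(pickle.dumps({"weights": [0.1, 0.2]})) == []
```

Static scanning is only one layer; it complements, rather than replaces, the runtime testing and red teaming discussed in the episode.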
Name Squatting Model Tried To Steal Cloud Credentials
- Ian found a name-squatting model pretending to come from a well-known healthcare company.
- That malicious model was downloaded tens of thousands of times and attempted to steal cloud credentials when deserialized.
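The deserialization payload described here is possible because Python's pickle format, still a common model serialization choice, lets the file's author pick a callable that runs at load time. A minimal, harmless sketch (the class name and marker-file path are illustrative; a real payload would read cloud credentials rather than touch a file):

```python
import os
import pickle
import tempfile

# Where our harmless "payload" leaves evidence; a real attack would read
# ~/.aws/credentials or instance metadata and send it to the attacker.
marker = os.path.join(tempfile.gettempdir(), "malicious_model_demo.txt")
if os.path.exists(marker):
    os.remove(marker)

class MaliciousModel:
    """Stand-in for a trojaned artifact name-squatting a trusted publisher."""
    def __reduce__(self):
        # pickle records this (callable, args) pair, and pickle.load CALLS
        # it during deserialization -- before any model method is invoked.
        return (open, (marker, "w"))

artifact = pickle.dumps(MaliciousModel())
assert not os.path.exists(marker)

pickle.loads(artifact)           # the victim merely *loads* the model...
assert os.path.exists(marker)    # ...yet the payload has already run
```

This load-time execution path is why weight-only serialization formats and pre-load artifact scanning are common mitigations for models pulled from open repositories.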
