Code Story: Insights from Startup Tech Leaders

The Gene Simmons of Data Protection - AI Inference-time Guardrails

Feb 11, 2026
Ave Gatton, Director of Generative AI at Protegrity and former atomic and optical physicist, discusses inference-time AI risks. She outlines prompt injection, data exfiltration, and agent misuse. She compares training vs inference threats. She walks through industry differences and practical secure-by-design guardrails for real-time protection.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

Agents Have No Built-In Security Model

  • Agents bridge users and external systems, creating unique security risks when given data access and communication abilities.
  • Ave Gatton warns agents lack an internal security model and can be directed to exfiltrate or manipulate data if seized.
INSIGHT

Training Risks Versus Inference Risks

  • Training-time risks like model poisoning embed backdoors in model behavior via malicious fine-tuning data.
  • Inference-time risks are tactical and immediate, where prompts or interactions cause malicious actions in the moment.
ADVICE

Design For Inevitable Jailbreaks

  • Assume any deployed agent will be jailbroken at scale and design defenses accordingly.
  • Focus on limiting the worst-case actions an agent can take rather than assuming perfect immunity.
Get the Snipd Podcast app to discover more snips from this episode
Get the app