Cloud Security Podcast

How Attackers Bypass AI Guardrails with Natural Language

Feb 10, 2026
Eduardo Redondo Garcia is Global Head of Cloud Security Architecture at Check Point, with decades of experience in security and AI fraud detection. He discusses how natural language becomes an attack vector, prompt injection and runtime defenses, risks from Shadow AI and third-party models, scaling social engineering with GenAI, and tackling deepfakes and biometric bypasses.
INSIGHT

Natural Language Is The New Executable

  • Natural language prompts are now the primary attack surface for generative AI systems.
  • Eduardo says "natural language is your executable," which reframes security around intent rather than code.
INSIGHT

Multilingual Prompts Bypass English Guardrails

  • Attackers exploit multilingual prompts and untested languages to bypass guardrails.
  • Eduardo notes many guardrails were tested only in English, leaving non-English vectors vulnerable.
ADVICE

Split Effort: Shift Left And Runtime Monitor

  • Do invest in both AI governance and data governance before deploying models.
  • Eduardo recommends splitting effort roughly 30% on shift-left hygiene and 70% on runtime monitoring to catch unforeseen behaviors.