The MLSecOps Podcast

Breaking and Securing Real-World LLM Apps

Jul 16, 2025
Rico Komenda, an AI security specialist at Adesso SE, and Javan Rasokat from Sage share their expertise on securing LLM-integrated systems. They dig into prompt injection attacks and why they pose serious risks, and explain how vulnerabilities extend beyond the model itself to data pipelines and APIs, underscoring the need for robust security measures. They also examine AI firewalls and other strategies for hardening application security. Their insights on the evolving AI security landscape are both timely and practical.
ANECDOTE

How Both Entered AI Security

  • Rico described his transition from AppSec into AI security after engaging with the MLSecOps community and talks.
  • Javan recounted building a Sage LLM co-pilot which sparked his practical interest in AI security.
ANECDOTE

Refund Demo Shows Model Risk

  • Rico shared a demo where a model overturned a refund decision, illustrating the need for checks on model autonomy.
  • He stressed that non-determinism and possible poisoned data mean models shouldn't make high-impact decisions unchecked.
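The gating pattern Rico describes can be sketched as a simple approval gate in front of the model's output: any high-impact action is queued for human review rather than executed directly. This is a minimal illustration, not the episode's implementation; the action names, threshold, and helper functions (`Decision`, `requires_approval`, `execute_decision`) are all hypothetical.

```python
# Hypothetical sketch: gate high-impact model decisions behind a human
# approval step, since non-deterministic (and possibly poisoned) models
# shouldn't act unchecked. All names and thresholds here are illustrative.
from dataclasses import dataclass

HIGH_IMPACT_ACTIONS = {"refund", "account_deletion", "credit_increase"}


@dataclass
class Decision:
    action: str
    amount: float = 0.0


def requires_approval(decision: Decision, amount_threshold: float = 50.0) -> bool:
    """Flag decisions that must not be executed on model output alone."""
    if decision.action in HIGH_IMPACT_ACTIONS:
        return True
    return decision.amount > amount_threshold


def execute_decision(decision: Decision, human_approved: bool = False) -> str:
    """Execute only low-impact or explicitly approved decisions."""
    if requires_approval(decision) and not human_approved:
        return "queued_for_review"  # never act directly on the model's say-so
    return "executed"
```

In the refund demo's terms: even if the model overturns a refund decision, the gate routes it to a reviewer instead of acting on it.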
ADVICE

Treat RAG Sources As Untrusted Inputs

  • Use RAG to inject external, relevant data into prompts but treat retrieval sources as untrusted and validate them before use.
  • Protect embeddings, vector stores, and retrieval pipelines because attackers can poison or manipulate external knowledge sources.