David Bombal cover image

#563: Securing LLMs and fighting Prompt Injection with Algorithmic Red Teaming

David Bombal

00:00

Risks of running external models and MCP

David asks about public models; Rick warns of poisoned models, prompt injection, and unsecured Model Context Protocols.

Play episode from 19:51
Transcript

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app