
A dark side to LLMs. [Research Saturday]
CyberWire Daily
The Threats of Prompt Injection in Large Language Models
Sahar Abdelnabi of the CISPA Helmholtz Center for Information Security discusses research titled "A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models." "There might be some new security vulnerabilities that we are really not noticing when we put these large language models in other applications," says Abdelnabi.


