David Bombal

#562: Warning and demo: It's possible to Prompt Engineer Malware

Mar 23, 2026
Kieran Human, a security practitioner at ThreatLocker, demonstrates prompt-engineered malware techniques in live demos. He shows how LLM guardrails can be bypassed to generate PowerShell ransomware and data-stealing scripts. The conversation covers evading Defender, hiding malicious intent with comments, and what defenders should test for.
INSIGHT

LLM Guardrails Can Be Circumvented With Stepwise Prompts

  • Large public LLMs can be coaxed into producing fully functional malware despite apparent guardrails.
  • Kieran used Copilot: a direct request for ransomware was refused, but asking for a backup script followed by a deletion command produced the same ransomware script the first prompt had been denied.
ANECDOTE

Local LLM Quickly Produced Clipboard Stealer And Exposed Data

  • Kieran hosted a completely local LLM that produced a clipboard-stealing PowerShell script in minutes on a laptop.
  • He tested scripts that copied clipboard contents and uploaded them to a Google Cloud Storage bucket, and in one test found bank information exposed in the bucket.
ADVICE

Limit User Scripting Privileges And File Access

  • Limit user-level scripting privileges and block PowerShell from broad file access to reduce data theft risk.
  • Kieran notes that scripts running with user-level access can read the user's documents unless policies block PowerShell from accessing files or the internet.
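
The internet-blocking half of that advice can be sketched as a Windows Firewall rule; a minimal configuration example, assuming the default Windows PowerShell path and run from an elevated session (the rule name is illustrative, and a real deployment would also cover the 64-bit and PowerShell 7 `pwsh.exe` paths):

```powershell
# Block outbound network access for powershell.exe so a data-stealing
# script cannot exfiltrate what it reads (requires an elevated session).
New-NetFirewallRule -DisplayName "Block PowerShell outbound" `
    -Direction Outbound `
    -Program "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
    -Action Block
```

This does not stop a script from reading files; pairing it with application control (e.g. AppLocker script rules or a solution like ThreatLocker, as discussed in the episode) addresses the file-access side.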