
This Week in AI How to Secure Your OpenClaw Agent | Interview with ZioSec Founders
16 snips
Feb 4, 2026 Andrius Useckas, CTO-level security expert who researches AI-agent exploits, and Aaron Walls, CEO focused on platform-driven penetration testing, break down OpenClaw risks. They describe real attack vectors like prompt injections, supply-chain and memory-based threats. They also discuss hosting tradeoffs, the “lethal trifecta” widening attack surfaces, and practical defenses such as sandboxes and input sanitization.
AI Snips
Chapters
Transcript
Episode notes
Capabilities Expand The Attack Surface
- Adding skills and data massively expands the model's attack surface.
- The core model's fallibility becomes the limiting security factor once you connect more capabilities.
Build Security Into The Framework
- Secure agent behavior at the framework level, not only via model rules.
- Design static code-level controls and permissions into tools to reduce jailbreak risks.
Prompt Injections Work Front And Back
- Prompt injections can occur both via exposed front-ends and through back-end skills or emails.
- Hidden content (HTML comments, encoded text) can carry malicious instructions unseen by humans.

