
The Data Exchange with Ben Lorica Securing the "YOLO" Era of AI Agents
Feb 26, 2026 Jason Martin, Director of Adversarial Research at HiddenLayer, is an AI security researcher who analyzes agent threats. He explains why OpenClaw went viral, how its design and defaults enable risky autonomy, and demonstrates prompt-injection and agent-takeover techniques. He also covers internet-facing instances, agent botnet risks, and concrete mitigation ideas in short, punchy segments.
Viral Growth Produced Rapid Change And Many Forks
- OpenClaw exploded in popularity, hitting ~180,000 GitHub stars and 30,000 forks within weeks as people used it for automation and running on Mac minis.
- Rapid growth produced a torrent of commits (roughly 500 in one week) and a fast-evolving contributor base.
Enforce Access Controls Outside The Model
- Prevent models from making critical security decisions by enforcing software-level access controls rather than relying on the model to ask for permission.
- For example, make heartbeat.md and other executable instruction files non-writable or require explicit human confirmation enforced by the app, not the model.
Default Installs Exposed Many Public Instances
- Insecure default configurations and vibe-coded development left many OpenClaw instances internet-facing, enabling easy remote access and exposing them to unknown vulnerabilities.
- Some defaults were patched quickly (one CVE fixed), but thousands of instances remain internet-reachable.
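A safe-by-default alternative to the exposure described above is to bind only to loopback unless the operator explicitly opts in. This is a generic sketch, not OpenClaw's configuration code; the `ALLOW_EXTERNAL_ACCESS` environment variable is a hypothetical name for this example.

```python
import os

def resolve_bind_address() -> str:
    """Return the address a local agent server should bind to.

    Default is loopback (reachable only from the same machine); binding
    to all interfaces requires an explicit, operator-set opt-in
    (ALLOW_EXTERNAL_ACCESS is an assumed variable name, for illustration).
    """
    if os.environ.get("ALLOW_EXTERNAL_ACCESS") == "1":
        return "0.0.0.0"  # operator knowingly accepted internet exposure
    return "127.0.0.1"    # default: local-only, never internet-reachable
```

Flipping the default this way means a fresh install is invisible to internet scanners, and exposure becomes a deliberate decision rather than an accident of installation.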
