
The Rollup: George Zeng on Why Your AI Agent Isn't Safe
Mar 3, 2026. George Zeng, co-founder at NEAR and lead of Iron Claw, a Rust-built AI agent framework for secure tool access, talks about rebuilding agents in Rust for memory safety. He covers sandboxed tool access, prompt-injection defenses, encrypted secrets, and a demo where an agent ordered $150 worth of pizza.
Episode notes
Rewriting Agents In Rust For Real Security
- Iron Claw was rebuilt in Rust to prioritize security over rapid prototyping.
- George Zeng cites memory safety, per-tool sandboxing, prompt-injection protections, and encrypted secrets as the core architectural changes that make it more trustworthy than OpenClaw.
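The per-tool sandboxing idea can be sketched in a few lines of Rust. This is a hypothetical illustration, not Iron Claw's actual API: each tool is registered with an explicit capability allowlist, and every invocation names the capability it needs, so a prompt-injected request for an unlisted capability is refused before the tool runs. The names `SandboxedTool`, `invoke`, and `ToolError` are assumptions for this sketch.

```rust
use std::collections::HashSet;

// Hypothetical error type: the only failure shown here is a denied capability.
#[derive(Debug, PartialEq)]
enum ToolError {
    CapabilityDenied(String),
}

// A tool wrapped with the capability set it was granted at registration time.
struct SandboxedTool {
    name: String,
    allowed: HashSet<String>,
}

impl SandboxedTool {
    fn new(name: &str, caps: &[&str]) -> Self {
        SandboxedTool {
            name: name.to_string(),
            allowed: caps.iter().map(|c| c.to_string()).collect(),
        }
    }

    // Every call declares the capability it needs; anything outside the
    // allowlist is rejected, regardless of what the model asked for.
    fn invoke(&self, capability: &str, arg: &str) -> Result<String, ToolError> {
        if !self.allowed.contains(capability) {
            return Err(ToolError::CapabilityDenied(capability.to_string()));
        }
        Ok(format!("{}: ran {} on {}", self.name, capability, arg))
    }
}
```

The point of the design is that the security boundary lives in the framework, not in the prompt: a filesystem tool granted only `read` cannot be talked into writing, no matter what text the model produces.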
Iron Claw Born On A Night Feeding
- Iron Claw's origin story began when Ilya inspected the OpenClaw code while feeding his baby and decided to rewrite it.
- He implemented the first Iron Claw version during night feeds to build a more secure agent framework.
Rogue Behavior Comes From Models And Permissions
- An agent 'going rogue' is often model-level behavior, not just a framework failure.
- George explains that poor model decisions combined with granted permissions produce harmful actions, so frameworks must pair architectural security with model-level fixes.
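One way a framework can backstop bad model decisions is a hard policy gate on high-risk actions, such as spending money, as in the $150 pizza demo. The sketch below is a hypothetical illustration (the `SpendGuard` name and threshold logic are assumptions, not Iron Claw's implementation): purchases above a configured limit are routed to a human instead of executing automatically.

```rust
// Outcome of the framework-level policy check, independent of the model.
#[derive(Debug, PartialEq)]
enum Decision {
    Execute,       // within policy: run unattended
    NeedsApproval, // over the limit: require explicit human sign-off
}

// Hypothetical guard: even if the model decides to buy something,
// the framework enforces a spend ceiling the model cannot override.
struct SpendGuard {
    auto_approve_cents: u64,
}

impl SpendGuard {
    fn check(&self, amount_cents: u64) -> Decision {
        if amount_cents <= self.auto_approve_cents {
            Decision::Execute
        } else {
            Decision::NeedsApproval
        }
    }
}
```

This mirrors the episode's point: the model may still make a poor decision, but the permissions it was granted, not the prompt, determine what that decision can actually do.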
