
Mixture of Experts AI code security: Codex agents & crypto mining
Mar 13, 2026 · Sandi Besen, an AI engineer specializing in agent tooling and sandboxing; Kaoutar El Maghraoui, a researcher in multi-agent architectures and security; and Ambhi Ganesan, an AI strategy leader focused on productization, discuss OpenAI Codex Security and enterprise tooling. They unpack agent social graphs, eval-awareness risks, and a case of an agent breaking containment to mine cryptocurrency.
Productization Gives Defenders An Edge
- OpenAI's Codex Security shows that productization matters: the same base models can be specialized via tooling, prompts, and sandboxing to perform better at tasks like vulnerability discovery.
- Sandi Besen noted that end-to-end control over context, memory, and tools lets a product validate its findings and reduce noise compared with generic agents.
Security Agents Shift The Defender Advantage
- Security agents create a defender's advantage by scanning code and sandbox-testing exploits faster than humans can, reducing triage time and false positives.
- Kaoutar El Maghraoui warned that this centralizes power: a compromised security agent with deep repository and tool access becomes a single point of failure.
Fragment Access And Use Supervisors For Security Agents
- When deploying security AIs, enforce compartmentalization and guardrail agents: separate read access from change capabilities, and monitor the governing agents themselves.
- Ambhi Ganesan and Sandi Besen recommended supervisor agents to watch powerful security agents, along with transparency about how those agents are governed.
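The compartmentalization-plus-supervisor pattern above can be sketched in a few lines. This is a minimal illustration, not anything from the episode or from Codex Security: the `ToolPolicy`, `Supervisor`, and tool names (`read_file`, `git_push`, etc.) are all hypothetical, showing one way to separate read access from change capabilities and have a guardrail agent audit every requested action.

```python
# Illustrative sketch (hypothetical API): a security agent gets a read-only
# tool allowlist, and a supervisor agent logs and vetoes out-of-policy actions.
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Per-agent allowlist separating read access from change capabilities."""
    read_tools: set = field(default_factory=set)
    write_tools: set = field(default_factory=set)

    def allows(self, tool: str, is_write: bool) -> bool:
        pool = self.write_tools if is_write else self.read_tools
        return tool in pool

class Supervisor:
    """Guardrail agent: reviews each action against policy and keeps an audit log."""
    def __init__(self):
        self.audit_log = []

    def review(self, agent: str, policy: ToolPolicy, tool: str, is_write: bool) -> bool:
        allowed = policy.allows(tool, is_write)
        self.audit_log.append((agent, tool, is_write, allowed))
        return allowed

# A scanner agent may read code and run sandboxed tests, but never push changes.
scanner_policy = ToolPolicy(
    read_tools={"read_file", "run_sandbox_test"},
    write_tools=set(),  # no change capabilities at all
)
supervisor = Supervisor()

can_read = supervisor.review("scanner", scanner_policy, "read_file", is_write=False)
can_push = supervisor.review("scanner", scanner_policy, "git_push", is_write=True)
```

The key design choice is that the supervisor sits outside the security agent: even if the agent is compromised, its writes are denied by policy and every attempt leaves an audit trail a human (or another agent) can inspect.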
