
Talk Python To Me #541: Monty - Python in Rust for AI
Mar 19, 2026 · Samuel Colvin, creator of Pydantic and now leading Pydantic AI and Monty, has built a Rust-based sandboxed Python runtime for running LLM-generated code. He discusses Monty's microsecond startup, strict sandboxing with host-mediated access, serialization of interpreter state for resumability, and the tradeoff of accepting limited Python features and no pip installs in exchange for safety and performance.
Microsecond Sandboxed Python Tailored For LLM Code
- Monty is a purpose-built Python interpreter in Rust designed to run LLM-generated code with microsecond startup and built-in sandboxing.
- It serializes the whole interpreter state to a database for durable tool calls and resumption, avoiding long-lived processes and container cold starts.
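The pause/serialize/resume flow described above can be sketched in plain Python. This is a conceptual illustration only, not Monty's API: a generator stands in for the paused interpreter (with the caveat that real generators are not picklable, which is exactly why Monty serializes full interpreter state instead), and `agent_program`, `host_tools`, and `run` are hypothetical names.

```python
# Conceptual sketch of host-mediated, resumable tool calls. In Monty the whole
# interpreter state is serialized to storage between calls; here a generator
# stands in for the paused guest program (caveat: generators aren't picklable).
def agent_program():
    files = yield ("list_files", "/data")   # pause: ask the host for a tool call
    text = yield ("read_file", files[0])    # resume with the result, pause again
    return f"summary of {text!r}"

def host_tools(name, arg):
    # Hypothetical tool handlers; a real host would mediate and audit these.
    if name == "list_files":
        return ["/data/report.txt"]
    if name == "read_file":
        return "hello"
    raise PermissionError(name)

def run(program):
    gen = program()
    try:
        request = gen.send(None)            # start; guest yields its first request
        while True:
            result = host_tools(*request)   # host performs the external call
            request = gen.send(result)      # resume the guest with the result
    except StopIteration as done:
        return done.value
```

Between any two `gen.send` calls, a durable runtime would snapshot the guest's state to a database and could resume hours later on a different machine.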
Use LLMs Plus Tests To Build Complex Runtimes Faster
- Leverage LLMs to accelerate building complex infrastructure like interpreters: have them generate tests and implementation patterns, then iterate with fuzzing and unit checks.
- Use test-driven / fuzz-driven loops to validate behavior against CPython where determinism matters.
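The fuzz-driven loop above has a simple shape: generate random programs, run them through both the candidate interpreter and CPython, and assert agreement. A minimal sketch, with a toy AST-walking evaluator standing in for the reimplemented interpreter (Monty itself is written in Rust; `candidate_eval`, `random_expr`, and `fuzz` are illustrative names):

```python
import ast
import operator
import random

# Toy arithmetic evaluator standing in for a reimplemented interpreter.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def candidate_eval(expr: str) -> int:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise NotImplementedError(type(node).__name__)
    return walk(ast.parse(expr, mode="eval"))

def random_expr(depth: int = 3) -> str:
    # Random arithmetic expressions; +, -, * only, so no division-by-zero noise.
    if depth == 0 or random.random() < 0.3:
        return str(random.randint(1, 9))
    op = random.choice(["+", "-", "*"])
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

def fuzz(n: int = 1000) -> int:
    random.seed(0)
    for _ in range(n):
        expr = random_expr()
        # CPython's eval() is the oracle the candidate must agree with.
        assert candidate_eval(expr) == eval(expr), expr
    return n
```

The same loop scales up: swap `random_expr` for an LLM generating tricky Python snippets, and `candidate_eval` for the real interpreter under test.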
Require Explicit Host Callbacks For All External Access
- Monty forces every real-world interaction to be an explicit host callback, so file, env, and network access require explicit exposure by the host.
- That design enables strict controls, proxying (e.g., block localhost), and secure auditing before executing requests.
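The callback-gating pattern described above can be sketched with `exec` and an empty `__builtins__`, so the guest code can reach the outside world only through one audited entry point. All names here (`host_call`, `http_get`, `ALLOWED`, `run_sandboxed`) are hypothetical illustrations of the design, not Monty's real interface, and Monty's actual isolation is enforced by the Rust interpreter rather than by `exec` tricks:

```python
from urllib.parse import urlparse

# Hypothetical host-callback registry: only names listed here are reachable.
def http_get(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in ("localhost", "127.0.0.1"):       # proxy policy: block loopback
        raise PermissionError(f"blocked host: {host}")
    return f"GET {url}"                          # stub; a real host would fetch

HANDLERS = {"http_get": http_get}

def host_call(name: str, *args):
    if name not in HANDLERS:
        raise PermissionError(f"callback not exposed: {name}")
    print(f"audit: {name}{args}")                # audit before executing
    return HANDLERS[name](*args)

def run_sandboxed(code: str) -> dict:
    # No builtins: the guest cannot reach open(), __import__(), sockets, etc.
    env = {"__builtins__": {}, "host_call": host_call}
    exec(code, env)
    return env
```

Because every external effect funnels through `host_call`, the host can log, proxy, rate-limit, or veto each request before anything actually happens.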

