
GOTO - The Brightest Minds in Tech A Common-Sense Guide to AI Engineering • Jay Wengrow & Kris Jenkins
Apr 28, 2026. Jay Wengrow is a software engineer, founder of Actualize, and author of practical programming books. He explains how text-only LLMs become agents by emitting special notations that trigger real functions. He and Kris Jenkins cover guardrails such as regex filters and judge models, splitting work across specialized models, a 150-line podcast-generating agent, and why building from first principles can beat adopting frameworks prematurely.
Agents Work By Intercepting LLM Text To Trigger Code
- AI agents are a clever hack: application code watches an LLM's text output for a special notation and triggers real functions when it appears.
- Jay Wengrow explains using a system prompt that instructs the LLM to emit an arbitrary marker like {{send email}}, which the app scans for before calling a deterministic send-email tool.
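The interception pattern above can be sketched in a few lines. This is a minimal illustration, not the episode's actual code; the `{{send email}}` notation comes from Jay's example, but the registry and function names here are assumptions.

```python
import re

def send_email() -> str:
    # Stand-in for a real, deterministic email-sending function.
    return "email sent"

# Hypothetical registry mapping marker text to real functions.
TOOLS = {"send email": send_email}

def dispatch(llm_output: str) -> list[str]:
    """Scan the LLM's text for {{marker}} notation and run matching tools."""
    results = []
    for marker in re.findall(r"\{\{(.+?)\}\}", llm_output):
        tool = TOOLS.get(marker.strip())
        if tool:
            results.append(tool())
    return results
```

The key design point is that the LLM never executes anything itself: it only emits text, and deterministic code decides whether that text warrants a function call.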
Combine Regex ML And Judge LLMs For Guardrails
- Implement guardrails that inspect LLM output and prevent undesirable text from reaching users.
- Use regex, specialized cheaper ML models, or a judge LLM to filter toxic or unsafe content, but beware that judge LLMs add latency and cost.
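One way to layer these checks, cheapest first, is sketched below. The blocklist pattern and the judge stub are illustrative assumptions; a real judge would be a second LLM call asking whether the output is safe, which is why it runs last.

```python
import re

# Illustrative blocked patterns; a real guardrail would use curated rules.
BLOCKLIST = re.compile(r"\b(password|ssn)\b", re.IGNORECASE)

def regex_guardrail(text: str) -> bool:
    """Cheapest check: reject output matching blocked patterns."""
    return not BLOCKLIST.search(text)

def judge_llm(text: str) -> bool:
    # Placeholder for a second "judge" LLM call; it adds latency and
    # cost, so it only runs after the cheap checks pass.
    return True

def guard(text: str, fallback: str = "Sorry, I can't share that.") -> str:
    """Return the LLM output only if every guardrail approves it."""
    if regex_guardrail(text) and judge_llm(text):
        return text
    return fallback
```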
Split Complex Workflows Into Specialized LLM Roles
- Specialization comes from splitting complex tasks into subagents with focused system prompts rather than relying on one LLM to do everything.
- Jay's podcast example uses separate LLMs: one for web research and another tuned to craft a human-friendly transcript before text-to-speech.
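The two-role pipeline might look like the sketch below. The `call_llm` stub and both prompt strings are hypothetical, standing in for real model calls with focused system prompts, as in Jay's podcast agent.

```python
# Stub for a real LLM API call; here it just tags the input with its role
# so the pipeline's flow is visible without network access.
def call_llm(system_prompt: str, user_input: str) -> str:
    role = system_prompt.split(".")[0]
    return f"[{role}] {user_input}"

# Each subagent gets a narrow, focused system prompt (illustrative text).
RESEARCH_PROMPT = "You are a web researcher. Gather key facts on the topic."
WRITER_PROMPT = "You are a podcast scriptwriter. Turn notes into a friendly transcript."

def make_podcast(topic: str) -> str:
    notes = call_llm(RESEARCH_PROMPT, topic)       # subagent 1: research
    transcript = call_llm(WRITER_PROMPT, notes)    # subagent 2: writing
    return transcript                              # next stop: text-to-speech
```

Splitting the work this way lets each prompt stay short and specific instead of asking one model to research, write, and format in a single pass.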
