
The BugBash Podcast: Symmathesy and the Agentic Era: Learning Systems in 2026
Mar 18, 2026
Jessica Kerr, a software developer and systems thinker known for popularizing symmathesy, explains learning systems made of learning parts. The conversation covers treating software as a teammate, observability as the system's language, AI agents joining teams, hallucination risks, shaping agent behavior with context, and redefining legacy code as code that agents cannot understand.
AI Snips
Agents Break Deterministic Feedback Loops
- Agents introduce non-deterministic actors that can hallucinate, breaking the traditional verifiable feedback loops of software.
- Kerr contrasts deterministic software, which is easy to debug, with agents that "make some shit up" and so require new handling.
Fix The Context Not The Agent
- When agents err, change the inputs: improve prompts, context, and tooling rather than blaming the model.
- Kerr's practice is to rewind (revert the code) and refactor to make the task easier, so the agent succeeds on the next attempt.
Agents Learned To Commit
- Kerr recounts an agent that initially refused to commit, then amended its own instructions and asked permission to commit.
- She contrasts that flabbergasting moment with another agent that later auto-committed after a settings tweak.

