
“Historically is like six months ago”
Apr 5, 2026 A lively chat about whether model context size changes agent behavior and which projects fit each model. The hosts compare model roles and the tradeoffs between cheaper, faster models and heavier ones that can handle multi-file changes. They dig into why human review struggles with long project histories and propose agent roles, PR annotations, and layered AI systems to preserve domain knowledge.
AI Snips
Agent Framed As Consulting Company Reduces Noise
- Avdi set up an agent (Claude) as a consulting company, with himself as the client, so the agent avoids low-level coding questions.
- Because a client would not field detailed implementation questions, the agent asks far fewer of them, letting Avdi step away while it runs.
Choose Models By Role To Balance Cost And Capability
- Model choice can be role-specific: Avdi uses Opus as project lead and Sonnet as developer to balance capability and cost.
- Smaller models like Sonnet handle many tasks cheaply, while Opus steps in for higher-context leadership or complex decisions.
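The role-based split described above can be sketched as a small routing table. This is an illustrative sketch, not a real API: the model names, the `ROLE_MODELS` mapping, and the `pick_model` helper are all assumptions for demonstration.

```python
# Hypothetical role-to-model routing, in the spirit of Avdi's setup:
# a pricier, higher-context model leads; a cheaper one does day-to-day work.
ROLE_MODELS = {
    "project_lead": "claude-opus",   # complex decisions, multi-file context
    "developer": "claude-sonnet",    # cheap and fast for most tasks
}

def pick_model(role: str) -> str:
    """Return the model assigned to a role, defaulting to the cheap one."""
    return ROLE_MODELS.get(role, ROLE_MODELS["developer"])
```

The design choice being illustrated: unknown or low-stakes roles fall through to the cheaper model, so cost only rises when a role explicitly demands the heavier one.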
Bigger Context Can Make Larger Models Faster
- Larger models with bigger context windows (Opus) can sometimes be faster on complex multi-file changes because smaller models stall when context is full.
- Jessitron has observed Sonnet wedge on large integrated changes; switching to Opus then finishes the job quickly.


