Machines of Code and Grace

“Historically is like six months ago”

Apr 5, 2026
A lively chat about whether model context size changes agent behavior and project fit. They compare model roles and tradeoffs between cheaper, faster models and heavier ones that handle multi-file changes. They dig into why human review struggles with long project histories and propose agent roles, PR annotations, and layered AI systems to preserve domain knowledge.
ANECDOTE

Agent Framed As Consulting Company Reduces Noise

  • Avdi set up a Claude agent framed as a consulting company, with himself as the client.
  • This framing keeps the agent from asking low-level coding questions and lets Avdi step away while it runs.
INSIGHT

Choose Models By Role To Balance Cost And Capability

  • Model choice can be role-specific: Avdi uses Opus as project lead and Sonnet as developer to balance capability and cost.
  • Smaller models like Sonnet handle many tasks cheaply, while Opus steps in for higher-context leadership or complex decisions.
INSIGHT

Bigger Context Can Make Larger Models Faster

  • Larger models with bigger context windows (Opus) can sometimes be faster on complex multi-file changes, because smaller models stall when their context fills up.
  • Jessitron has seen Sonnet get wedged on a large integrated change; switching to Opus finished the job quickly.