
Notes On Work - by Caleb Porzio: They don't listen!
Feb 27, 2026
A breakdown of why AI often ignores instructions: context limits and next-word prediction cause instruction loss. Practical tactics for getting reliable results: keep prompts tiny and single-purpose, use deterministic checks like regex to enforce rules, and split work into isolated sessions and chained prompts to avoid bias and context rot.
AI Snips
AI Fails When Context Gets Too Big
- AI models often fail because they don't retain or follow complex contextual instructions consistently.
- Caleb Porzio compares AI to a toddler: if you give it too many rules or distractions, the model will ignore or forget them.
Split Tasks Into Single Purpose Prompts
- Break work into small, singular prompts so each LLM session has one clear task to increase reliability.
- Caleb demonstrates splitting a problem-statement generation and a solution-stripping step into separate sessions to avoid bias.
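The two-session split above can be sketched as code. This is a minimal illustration, not Caleb's actual implementation: the `run_session` function is hypothetical and stubbed in place of a real model API call, but the shape is the point, since each call is a fresh session that sees only the text it is given, never the other session's conversation history.

```python
def run_session(system: str, user: str) -> str:
    """Hypothetical single-purpose LLM call. Each invocation is an
    isolated session with no shared history. Stubbed for illustration;
    a real version would call your model API here."""
    if "problem statement" in system:
        # Session 1's output: a draft that accidentally leaks a solution.
        return "Write a function that reverses a string. Solution: s[::-1]"
    # Session 2's stub behavior: strip everything after "Solution:".
    return user.split("Solution:")[0].strip()

# Session 1: generate a problem statement (may leak a solution).
draft = run_session("You write one problem statement.", "Topic: strings")

# Session 2: a fresh session strips the solution. It is unbiased by
# session 1's reasoning because it only receives the draft text.
clean = run_session("You remove solutions from text.", draft)
```

Because session 2 never saw session 1's chain of thought, it cannot be anchored by it; it judges the draft purely on its surface content.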
Add Deterministic Guards To Enforce Rules
- Introduce deterministic guards around risky decisions, e.g., run a regex to block npm and force pnpm when executing bash.
- Matt Pocock's Claude Code hook example enforces constraints outside the LLM's context file.
