
go podcast() 076: From nginx to Caddy and we both had LLM quality issues/concerns
Mar 12, 2026

They discuss worrying patterns in AI-generated code quality and the heavy refactors needed after LLM intervention. They debate perceived declines in model performance, benchmarking challenges, and the risks of non-deterministic code. They also compare Caddy and nginx, covering on-demand TLS, load balancing, DNS quirks, deployment strategies, and multi-provider tradeoffs.
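The on-demand TLS feature mentioned in the Caddy/nginx comparison can be sketched with a minimal Caddyfile; this is an illustrative configuration, not one from the episode, and the approval endpoint and backend port are assumptions:

```
{
	# Before issuing a certificate for an unknown hostname,
	# Caddy asks this internal endpoint whether it is allowed
	# (hypothetical URL; replace with your own allow-list service).
	on_demand_tls {
		ask http://localhost:5555/allowed
	}
}

https:// {
	# Obtain certificates on the fly for any approved hostname,
	# instead of listing every domain up front.
	tls {
		on_demand
	}
	# Proxy approved traffic to a backend app (illustrative port).
	reverse_proxy localhost:8080
}
```

This is the pattern Caddy is known for and nginx lacks natively: serving TLS for customer-supplied domains without reloading configuration per domain.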
AI Snips
LLMs Speed Up Startups But Create Technical Debt
- LLMs can speed up early development, but they often produce messy code that requires heavy cleanup.
- Morten found the generated SQL and access-control logic so poor that he rewrote most of it before the alpha release, at a real cost in time.
Refactor After Letting Claude 'Cook' Go Code
- Dominique started a major Go refactor after letting an LLM generate backend code and realizing it needed extensive rework.
- Claude produced a refactor plan but hit its context limits, so Dominique carried out the plan manually instead.
Perceived LLM Quality Often Drops Over Time
- Model quality can appear to decline after launch, or feel inconsistent from day to day.
- Morten pointed to a site tracking per-task model performance trending downward, and suspects provider tuning and non-determinism erode usefulness.
