
Changelog & Friends: Kaizen! Let it crash
Jan 17, 2026
Gerhard Lazu, a Kaizen regular and engineer specializing in systems and infrastructure, returns to analyze out-of-memory errors and discuss the Pipedream instance status checker. He digs into the 'let it crash' philosophy and how it ties back to Erlang's design principles. Gerhard also examines abnormal download trends, notably a single episode being repeatedly accessed from Asia, and proposes ways to manage those traffic spikes, along with practical takes on reading Varnish behavior and using LLMs for performance analysis.
AI Snips
Prompt LLMs By Stakeholder
- Feed Varnish stats into an LLM only after framing audience and questions; pre-prompt for different stakeholders.
- Use multiple models to cross-check analysis and catch discrepancies.
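The stakeholder-framed prompting above can be sketched as a small helper that wraps the same Varnish stats snapshot in a different pre-prompt per audience. The stakeholder names, framings, and function are illustrative assumptions, not anything from the episode:

```python
import json

# Hypothetical audience framings -- adapt to your own stakeholders.
STAKEHOLDERS = {
    "sre": "Focus on error rates, storage pressure, and actionable fixes.",
    "finance": "Focus on bandwidth served versus origin egress costs.",
}

def build_prompt(stats: dict, stakeholder: str) -> str:
    """Frame one varnishstat snapshot differently for each audience.

    The same stats go to every stakeholder; only the pre-prompt changes.
    Sending the result to two or more models and diffing their answers
    is one way to cross-check the analysis for discrepancies.
    """
    framing = STAKEHOLDERS[stakeholder]
    return (
        f"You are reporting to the {stakeholder} team. {framing}\n"
        f"Varnish stats (JSON):\n{json.dumps(stats, indent=2)}"
    )
```
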
Disk Allocator Failures Caused Major Bottleneck
- The file store was ~97% full with ~48GB used and disk allocation failures spiking, causing Varnish to fall back to RAM or fail caching large objects.
- The instance still managed a 93% hit ratio and served terabytes while exposing storage as the critical bottleneck.
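One way to avoid the full-disk scenario above is to size Varnish's file-backed store explicitly below disk capacity and watch the allocator-failure counters. A minimal sketch; the paths, sizes, and ports are illustrative, not the instance's actual configuration:

```shell
# Back the cache with a file store sized with headroom below the disk,
# so large-object allocations don't fail and fall back to RAM.
varnishd \
  -a :8080 \
  -f /etc/varnish/default.vcl \
  -s file,/var/lib/varnish/storage.bin,40G

# Watch allocator failures on the file stevedore; a rising c_fail
# counter is the signal that storage is the bottleneck.
varnishstat -1 -f 'SMF.*.c_fail'
```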
Use Requests Concurrency And Idle Timeouts
- Use 'requests' concurrency (not 'connections') for HTTP apps to avoid long-lived connection exhaustion at the proxy.
- Set idle timeouts and run periodic region checks to detect and prevent hung connections.
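On Fly.io, where the proxy concurrency model distinguishes connection counting from request counting, the 'requests' setting above looks like the fragment below. A sketch only; the port and limits are assumptions, not the show's values:

```toml
# fly.toml -- count in-flight requests, not open connections, so
# long-lived or hung connections don't exhaust the concurrency budget.
[http_service]
  internal_port = 8080

  [http_service.concurrency]
    type = "requests"
    soft_limit = 200
    hard_limit = 250
```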