
Improving Valkey with Madelyn Olson
Feb 9, 2026
Madelyn Olson, Principal SDE at Amazon ElastiCache and MemoryDB and Valkey maintainer, explains Valkey’s origin as a Redis fork and its maintainer-driven governance. She discusses a complete hash table redesign, memory compaction and allocation changes, throughput-focused benchmarking, and why Valkey stays C while selectively using Rust for extensions.
Compact Allocations To Reduce Overhead
- Compacting many small allocations into larger contiguous blocks reduces pointer overhead and improves cache behavior.
- Valkey borrowed ideas from high-performance cache systems like Pelikan/Segcache to pack data tightly.
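The compaction idea above can be sketched as a small arena in C. This is an illustrative example, not Valkey's actual data layout: it packs many small key/value entries into one contiguous buffer with length prefixes, so there are no per-entry pointers or allocator headers, and lookups walk memory sequentially. The `arena_t` type and function names are invented for this sketch.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: pack many small key/value entries into one
 * contiguous arena instead of one malloc() per entry. Each entry is
 * stored as [u8 keylen][u8 vallen][key bytes][val bytes], so a lookup
 * walks the buffer sequentially with no per-entry pointer chasing. */
typedef struct {
    unsigned char *buf;
    size_t used, cap;
} arena_t;

static int arena_init(arena_t *a, size_t cap) {
    a->buf = malloc(cap);
    a->used = 0;
    a->cap = cap;
    return a->buf != NULL;
}

/* Append an entry; returns 0 on success, -1 if full or oversized. */
static int arena_put(arena_t *a, const char *key, const char *val) {
    size_t kl = strlen(key), vl = strlen(val);
    if (kl > 255 || vl > 255 || a->used + 2 + kl + vl > a->cap)
        return -1;
    unsigned char *p = a->buf + a->used;
    p[0] = (unsigned char)kl;
    p[1] = (unsigned char)vl;
    memcpy(p + 2, key, kl);
    memcpy(p + 2 + kl, val, vl);
    a->used += 2 + kl + vl;
    return 0;
}

/* Linear scan: cache-friendly because all entries are contiguous. */
static const char *arena_get(const arena_t *a, const char *key, size_t *vlen) {
    size_t kl = strlen(key), off = 0;
    while (off < a->used) {
        const unsigned char *p = a->buf + off;
        size_t ekl = p[0], evl = p[1];
        if (ekl == kl && memcmp(p + 2, key, kl) == 0) {
            *vlen = evl;
            return (const char *)(p + 2 + ekl);
        }
        off += 2 + ekl + evl;
    }
    return NULL;
}
```

A real design would add a hash index over the arena rather than scanning, but the memory win is the same: one allocation and two length bytes per entry instead of separate heap objects with allocator metadata.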
Profile Throughput And Memory Stalls
- Measure throughput per core and memory-stall counters, not just latency, when profiling Valkey-like systems.
- Use microbenchmarks, perf flame graphs, and CPU memory-wait counters to catch subtle regressions.
Benchmark Realistic Object Sizes
- Test across realistic value-size distributions (tens to hundreds of bytes and kilobytes) to detect regressions.
- Add performance tests for edge cases like prefetching behavior to prevent stealthy slowdowns.
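One way to drive a benchmark across realistic value sizes is to sample from a mixed distribution rather than a single fixed size. The C sketch below is hypothetical: the bucket weights and ranges are illustrative, not measured from any real workload, and `sample_value_size` is an invented helper.

```c
#include <stdlib.h>

/* Hypothetical sketch: draw value sizes from a mixed distribution so a
 * benchmark covers small (tens of bytes), medium (hundreds of bytes),
 * and large (kilobyte-scale) objects. Weights are illustrative only. */
static size_t sample_value_size(unsigned int *seed) {
    int r = rand_r(seed) % 100;
    if (r < 60)          /* 60%: tens of bytes, 16..100 */
        return 16 + (size_t)(rand_r(seed) % 85);
    else if (r < 90)     /* 30%: hundreds of bytes, 100..1000 */
        return 100 + (size_t)(rand_r(seed) % 901);
    else                 /* 10%: kilobyte-scale, 1 KiB..8 KiB */
        return 1024 + (size_t)(rand_r(seed) % 7169);
}
```

Feeding a size distribution like this through the write path catches regressions that only appear at particular object sizes, such as the prefetching edge cases mentioned above, which a single-size benchmark would miss.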
