
Database School: Building serverless vector search with Turbopuffer CEO, Simon Eskildsen
Nov 13, 2025 — Simon Eskildsen, co-founder and CEO of Turbopuffer, shares insights from his time as an infrastructure engineer at Shopify and from founding Turbopuffer, a serverless vector search engine. He discusses the challenges of scaling databases during Shopify's hypergrowth and how that experience led to his approach of building vector indexes directly on object storage. The conversation covers Turbopuffer's fit for modern AI workloads, the cost problems of existing vector databases, and the case for cost-effective search infrastructure for emerging companies.
Cursor As A First Big Customer
- Cursor became an early adopter after struggling with vector costs and took a chance on TurboPuffer.
- Simon and Justine built a close partnership with Cursor, helping them early on with infrastructure tuning and Postgres expertise.
Why Object Storage Now Works
- Three cloud-era changes made object-storage-first databases practical: widespread NVMe availability, S3's strong consistency, and S3 compare-and-swap (conditional writes).
- Those primitives remove the need for a separate consensus layer and enable stateless nodes that keep their state in object storage.
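The compare-and-swap primitive mentioned above can be sketched in miniature. The class below is a toy in-memory stand-in (not S3 or Turbopuffer code; all names are assumptions) showing how an ETag-conditioned write lets competing stateless writers coordinate optimistically, which is the role a consensus layer would otherwise play:

```python
import hashlib

class ObjectStore:
    """Toy in-memory model of an object store with conditional writes
    (compare-and-swap on an object's ETag). Illustrative only."""

    def __init__(self):
        self._objects = {}  # key -> (etag, data)

    def get(self, key):
        """Return (etag, data) or None if the key is absent."""
        return self._objects.get(key)

    def put_if_match(self, key, data, expected_etag):
        """Write only if the object's current ETag matches expected_etag
        (None means 'key must not exist yet'). Returns the new ETag on
        success, or None if another writer got there first."""
        current = self._objects.get(key)
        current_etag = current[0] if current else None
        if current_etag != expected_etag:
            return None  # lost the race: caller re-reads and retries
        new_etag = hashlib.md5(data).hexdigest()
        self._objects[key] = (new_etag, data)
        return new_etag
```

Under this scheme a node updating, say, an index manifest reads the object, prepares a new version, and writes it back conditioned on the ETag it read; a concurrent writer's stale attempt simply fails and retries, with the object store acting as the single arbiter.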
Puffing: Cache Tiers And Latency Tradeoffs
- Turbopuffer uses a namespace-to-node hash, DRAM and NVMe caches, then range requests straight to S3 for the exact data blocks needed.
- Cold S3 reads take ~500–1000 ms, NVMe ~100 ms, and DRAM ~10 ms, reflecting the tiered latency trade-offs.
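The tiered read path described above can be sketched as follows. This is a minimal illustrative model, not Turbopuffer's implementation: the class names, promotion policy, and the dict standing in for S3 are all assumptions, and the latency constants just echo the episode's rough numbers:

```python
import hashlib

# Rough per-tier latencies from the episode (illustrative only).
TIER_LATENCY_MS = {"dram": 10, "nvme": 100, "s3": 750}

def node_for_namespace(namespace, nodes):
    """Hash a namespace to a node so repeated queries for the same
    namespace land where its cache is already warm."""
    h = int(hashlib.sha256(namespace.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

class Node:
    """Toy read path: check DRAM, then NVMe, then fall back to a
    range request against object storage, promoting on the way back."""

    def __init__(self, object_store):
        self.dram = {}                 # hottest tier
        self.nvme = {}                 # warm tier
        self.object_store = object_store  # stand-in for S3: key -> bytes

    def read_block(self, key):
        """Return (data, tier_served_from)."""
        if key in self.dram:
            return self.dram[key], "dram"
        if key in self.nvme:
            data = self.nvme[key]
            self.dram[key] = data      # promote for the next read
            return data, "nvme"
        data = self.object_store[key]  # cold path: range GET to S3
        self.nvme[key] = data          # warm both cache tiers
        self.dram[key] = data
        return data, "s3"
```

The first read of a block pays the cold object-storage price; subsequent reads of the same block on the same node are served from cache, which is why pinning a namespace to a node via the hash matters.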

