
The Infra Pod: Building a successful infra product between all the AI apps and model providers (chat with Louis from OpenRouter)
Tim (Essence VC) and Ian (Keycard) interview Louis Vichy, co-founder of OpenRouter, about why he built OpenRouter to de-risk AI app development (letting the end user pay LLM costs), how it scaled to processing roughly 5–6 trillion tokens per week, and what OpenRouter is today: a reliable inference routing and control layer across ~60 providers with consolidated billing and reduced vendor lock-in. Louis explains why teams adopt OpenRouter (constant new model integrations, pricing and billing, differing API shapes), how routing focuses on practical heuristics rather than a "magic router" (fallbacks, cost, throughput, latency), and how reliability is achieved via provider failover, e.g. switching to alternate endpoints like Vertex or Bedrock (a minimal API sketch follows the chapter list below). They discuss agent trends (longer-running agents, small models for routing and classification feeding specialized downstream models), possible memory support, developer conveniences such as automatic PDF parsing, and enterprise features (security and compliance guardrails, model presets). The episode closes with pointers to OpenRouter's chat and rankings pages and a note that the team is hiring high-agency, TypeScript-focused engineers.

00:00 Welcome & Meet Louis (OpenRouter Co‑Founder)
00:27 Origin Story: De‑Risking AI App Costs (Hackathon Lessons)
01:35 First Big Feature: End‑User Pays for Tokens (Sign in with OpenRouter)
02:34 From Routing to Rankings: Scaling to Trillions of Tokens
03:42 What OpenRouter Is Today: Reliable Inference Across 60+ Providers
05:55 Why Teams Adopt It: Avoiding Model API Churn, Billing, and Vendor Lock‑In
08:37 Winning Strategy: Don't Build a "Magic Router"—Optimize Cost/Latency/Throughput
18:58 From Chat to RAG + Memory: Building Persistent Agent Context
20:37 Developer Bells & Whistles: Auto PDF Parsing and More
21:11 Enterprise Readiness: Compliance, Security Guardrails & Model Presets
22:22 Customer Growth at Warp Speed in the AI Era
23:03 Spicy Future!
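For listeners who want to try the routing behavior Louis describes, here is a minimal TypeScript sketch against OpenRouter's OpenAI-compatible chat completions endpoint, showing a fallback model list and a provider routing preference. The specific model slugs and the `models`/`provider` parameter names follow OpenRouter's public docs at the time of writing and are assumptions for illustration, not details taken from the episode.

```typescript
// Minimal sketch: call OpenRouter's OpenAI-compatible chat completions endpoint
// with a fallback model list and a throughput-biased provider preference.
// Parameter names ("models", "provider.sort") are assumed from OpenRouter's docs
// and may change; model slugs below are examples only.

const OPENROUTER_API_KEY = process.env.OPENROUTER_API_KEY ?? "";

async function chatWithFallback(prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // Primary model, plus fallbacks the router can switch to on failure.
      model: "openai/gpt-4o-mini",
      models: ["anthropic/claude-3.5-sonnet", "meta-llama/llama-3.1-70b-instruct"],
      // Prefer higher-throughput providers, allowing failover between them.
      provider: { sort: "throughput", allow_fallbacks: true },
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chatWithFallback("Summarize this episode in one sentence.").then(console.log);
```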
