
AI Engineering Podcast
From Blind Spots to Observability: Operationalizing LLM Apps with OpenLit
Feb 15, 2026

Aman Agarwal, creator of OpenLit and builder of observability tooling for LLM apps, discusses operational foundations for running LLM-powered systems in production. He covers common blind spots like opaque model behavior, runaway token costs, and brittle prompt management. The conversation dives into OpenTelemetry-based tracing, prompt/version management, evaluation workflows, fleet instrumentation, and avoiding vendor lock-in.
From Music App Failures To OpenLit
- Aman built a music recommendation app and ran into hard-to-debug failures and runaway token costs.
- That experience motivated him to start OpenLit to improve AI development workflows.
Three Early Blind Spots For LLM Apps
- Major blind spots are opaque model behavior, unexpected token costs, and brittle prompt handling.
- Strong logging, observability, and prompt/version management are essential before shipping an MVP.
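One way to close the token-cost blind spot early is to log token counts and an estimated cost for every LLM call. A minimal stdlib-only sketch of that idea, with a hypothetical model name and made-up per-1K-token prices (real prices vary by provider and model):

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm.usage")

# Hypothetical per-1K-token prices for illustration only.
PRICE_PER_1K = {"example-model": {"prompt": 0.0005, "completion": 0.0015}}

@dataclass
class Usage:
    """Token counts reported by one LLM call."""
    model: str
    prompt_tokens: int
    completion_tokens: int

    @property
    def cost_usd(self) -> float:
        # Estimated cost: tokens / 1000 * per-1K price, summed over both sides.
        p = PRICE_PER_1K[self.model]
        return (self.prompt_tokens * p["prompt"]
                + self.completion_tokens * p["completion"]) / 1000

def record_usage(usage: Usage) -> float:
    """Log token counts and estimated cost, then return the cost."""
    log.info("model=%s prompt=%d completion=%d est_cost=$%.6f",
             usage.model, usage.prompt_tokens,
             usage.completion_tokens, usage.cost_usd)
    return usage.cost_usd
```

Wiring this into every call path gives a running cost total long before a surprise bill arrives; the same record can feed whatever observability backend is in use.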
Avoid Vendor Lock-In With OTEL
- Prefer OpenTelemetry-compatible tooling to avoid vendor lock-in and ease migration.
- Prioritize maintainability and community standards when selecting LLM ops components.
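The lock-in-avoidance argument rests on a pattern that OpenTelemetry standardizes: instrumentation emits spans with standard attribute names, and only the exporter is vendor-specific. A stdlib-only conceptual sketch of that pattern (not the actual OpenTelemetry SDK; attribute keys are modeled loosely on OTEL's GenAI semantic conventions):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Span:
    """One traced operation with vendor-neutral attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

# The exporter is the only vendor-specific piece: swapping it migrates
# the whole pipeline without touching instrumentation code.
Exporter = Callable[[Span], None]

def console_exporter(span: Span) -> None:
    """Trivial exporter: print the span. A real one would ship it over OTLP."""
    print(f"{span.name} {span.attributes}")

def instrument_llm_call(export: Exporter, model: str,
                        prompt_tokens: int, completion_tokens: int) -> Span:
    """Record one LLM call as a span and hand it to whichever exporter is configured."""
    span = Span("llm.chat", {
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": prompt_tokens,
        "gen_ai.usage.output_tokens": completion_tokens,
    })
    export(span)
    return span
```

Because callers depend only on the `Exporter` signature, moving from a console exporter to any OTEL-compatible backend is a one-line configuration change, which is the maintainability property the snip recommends selecting for.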
