Data Engineering Podcast

Prompt Management, Tracing, and Evals: The New Table Stakes for GenAI Ops

Feb 15, 2026
Aman Agarwal, creator of OpenLit and a builder of AI engineering tools, discusses making LLM apps reliable and debuggable. He covers opaque model behavior, runaway token costs, and brittle prompt management, and explains OpenTelemetry-native observability, prompt and secret versioning, eval workflows, and integrations that turn black-box model runs into stepwise traces ready for production.
AI Snips
ANECDOTE

From Music App Failures To OpenLit

  • Aman Agarwal started OpenLit after building a music recommendation app and running into debugging and token-cost problems.
  • Those early failures drove him to create tooling that makes AI development workflows manageable.
INSIGHT

LLM Behavior Is A Debugging Black Hole

  • LLM behavior is a black box without detailed traces of prompts, context, and tool use.
  • Observable, stepwise tracing is essential to debug and understand model responses.
ADVICE

Avoid Vendor Lock-In With OpenTelemetry

  • Prefer OpenTelemetry-native systems to avoid vendor lock-in and ease migration between tools.
  • Choose platforms that output standard formats so traces remain portable and maintainable.
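To make the portability point concrete, here is a minimal, dependency-free sketch of what an OpenTelemetry-style span for one LLM call might look like. All names here are hypothetical illustrations (the attribute keys loosely echo OTel's GenAI semantic conventions); a real deployment would use the OpenTelemetry SDK so any OTel-compatible backend, such as OpenLit, could ingest the traces.

```python
import json
import time
import uuid

def record_llm_span(prompt: str, response: str, model: str,
                    prompt_tokens: int, completion_tokens: int) -> dict:
    """Capture one LLM call as an OpenTelemetry-style span dict.

    Hypothetical helper for illustration only; real systems emit spans
    via the OpenTelemetry SDK so traces stay portable across backends.
    """
    return {
        "trace_id": uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex[:16],
        "name": "llm.completion",
        "start_time_unix_nano": time.time_ns(),
        # Attribute keys loosely follow OTel GenAI semantic conventions.
        "attributes": {
            "gen_ai.request.model": model,
            "gen_ai.prompt": prompt,
            "gen_ai.completion": response,
            "gen_ai.usage.prompt_tokens": prompt_tokens,
            "gen_ai.usage.completion_tokens": completion_tokens,
        },
    }

span = record_llm_span("Recommend a song", "Try 'So What' by Miles Davis",
                       model="example-model", prompt_tokens=12,
                       completion_tokens=9)
print(json.dumps(span["attributes"], indent=2))
```

Because the span is plain structured data in a standard shape, switching observability vendors means pointing an exporter at a different endpoint rather than re-instrumenting the application.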