Code RED

#23 - Engineering Intelligence: How to Build LLM Applications at Scale with Marc Klingen of Langfuse

Apr 17, 2025
Marc Klingen, CEO of Langfuse, shares insights on the company's journey from startup to leading open-source LLM engineering platform. He discusses the key challenges of building LLM applications, including the importance of monitoring key performance indicators. The conversation dives into tracing user interactions within AI applications and emphasizes collaboration among the diverse roles in AI engineering. Klingen also highlights how observability tools improve project outcomes and how open-source software fosters community-driven innovation.
INSIGHT

Traces Tailored for LLM Apps

  • LLM application tracing records each step of a request, including security checks, retrievals, and multiple LLM calls.
  • This structured tracing enables detailed evaluation beyond HTTP status codes, which is crucial for debugging and improving these apps.
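The step-by-step tracing described above can be sketched as a simple data model: one trace per request, holding an ordered list of recorded steps. This is an illustrative stdlib-only sketch, not the actual Langfuse SDK; all class and field names here are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical trace model (illustrative only, not the Langfuse API):
# one Trace per request, each recorded Step capturing a pipeline stage
# such as a security check, a retrieval, or an LLM call.
@dataclass
class Step:
    name: str     # e.g. "security-check", "retrieval", "llm-call"
    input: str
    output: str

@dataclass
class Trace:
    request_id: str
    steps: list[Step] = field(default_factory=list)

    def record(self, name: str, input: str, output: str) -> Step:
        step = Step(name, input, output)
        self.steps.append(step)
        return step

trace = Trace(request_id="req-123")
trace.record("security-check", "user prompt", "passed")
trace.record("retrieval", "user query", "3 documents")
trace.record("llm-call", "prompt + retrieved context", "model answer")
print([s.name for s in trace.steps])
```

Because every intermediate step is captured with its input and output, evaluation can inspect what each stage actually did rather than just whether the overall request returned a success code.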
INSIGHT

Tracing Conversations and Calls

  • Langfuse treats each individual LLM call as a trace and groups calls into sessions shown as chat conversations.
  • This structure helps technical and business users easily explore interactions and debug at multiple levels.
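The grouping described above can be sketched as traces that share a session identifier and are rendered together as one chat conversation. This is a hedged illustration of the idea; the dictionary keys are assumptions, not the Langfuse API.

```python
from collections import defaultdict

# Illustrative sketch (not the Langfuse SDK): each LLM call is one trace,
# and traces sharing a session_id belong to the same chat conversation.
traces = [
    {"trace_id": "t1", "session_id": "chat-A", "input": "Hi", "output": "Hello!"},
    {"trace_id": "t2", "session_id": "chat-A", "input": "Help me debug", "output": "Sure..."},
    {"trace_id": "t3", "session_id": "chat-B", "input": "Summarize this", "output": "Summary..."},
]

# Group individual call traces into sessions for a conversation-level view.
sessions = defaultdict(list)
for t in traces:
    sessions[t["session_id"]].append(t)

for session_id, calls in sessions.items():
    print(session_id, len(calls))
```

This two-level structure is what lets a business user browse whole conversations while an engineer drills down into a single call.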
INSIGHT

Cross-functional AI Engineering Teams

  • AI engineering teams are cross-functional, involving software engineers, ML researchers, PMs, domain experts, and executives.
  • Software engineers bring pragmatism and scaling skills, while ML roles focus on iterative learning and experimentation.