
Beyond The Pilot: Enterprise AI in Action
LangChain: What OpenClaw Got Right (And Why Enterprises Can't Have It)
Mar 4, 2026
Harrison Chase, co-founder and CEO of LangChain, leads the team behind the LangChain library and agent frameworks. He explains why OpenClaw succeeded where AutoGPT did not, breaks down the ingredients of Deep Agents (planning, to-do lists, subagents, file systems, and large system prompts), and outlines how to choose among LangGraph, LangChain, and harnesses, and why context engineering matters for reliable agent loops.
Choose LangGraph, LangChain, or Deep Agents Appropriately
- Use layered products: LangGraph for runtime and durable execution, LangChain for unopinionated abstractions, and Deep Agents for a batteries‑included harness.
- Harrison recommends using Deep Agents when you want opinionated context engineering and LangChain when you want flexibility.
Do Context Engineering For Every LLM Call
- Do context engineering: bring the right information in the right format to the LLM at the right time.
- Harrison advises treating tool definitions as part of the prompt and using file systems to let agents decide what to load into context.
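The idea that tool definitions are part of the prompt can be made concrete with a small sketch. This is illustrative only, not LangChain's API: the `render_tool` and `build_prompt` helpers and the tool-schema shape are assumptions made up for this example.

```python
# Minimal context-engineering sketch: tool definitions are rendered
# into the prompt text itself, so they count toward the context the
# model sees. Hypothetical helpers, not a LangChain API.

def render_tool(tool: dict) -> str:
    """Format one tool definition as prompt text the model can act on."""
    args = ", ".join(f"{name}: {typ}" for name, typ in tool["args"].items())
    return f"- {tool['name']}({args}): {tool['description']}"

def build_prompt(system: str, tools: list[dict], task: str) -> str:
    """Assemble the full context: system text, tool block, then the task."""
    tool_block = "\n".join(render_tool(t) for t in tools)
    return f"{system}\n\nAvailable tools:\n{tool_block}\n\nTask: {task}"

tools = [
    {"name": "search", "args": {"query": "str"},
     "description": "Search the web."},
    {"name": "read_file", "args": {"path": "str"},
     "description": "Read a file into context."},
]
prompt = build_prompt("You are a helpful agent.", tools,
                      "Summarize today's news.")
print(prompt)
```

Because the tool block is ordinary prompt text, "the right information in the right format at the right time" applies to it just like any other context: you can reorder, trim, or reword tool descriptions to steer the model.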
File Systems Let Agents Self‑Manage Context
- File systems let agents manage huge tool outputs and their own context instead of bloating message history.
- Harrison notes modern harnesses dump 40,000-token API responses to files and let the agent read files only when needed.
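The dump-to-file pattern can be sketched in a few lines. This is a hedged illustration, not the harness's actual implementation: the threshold, the `call_tool_with_offload` wrapper, and the `read_file` tool are assumptions invented for this example.

```python
# Sketch of file-system offloading: large tool outputs are written to
# disk and replaced in the message history by a short pointer plus a
# preview; the agent calls read_file only when it needs more.
import pathlib
import tempfile

LARGE_OUTPUT_THRESHOLD = 2000  # chars; illustrative cutoff, not a real default
workdir = pathlib.Path(tempfile.mkdtemp())

def call_tool_with_offload(name: str, output: str) -> str:
    """Return small outputs inline; spill large ones to a file."""
    if len(output) <= LARGE_OUTPUT_THRESHOLD:
        return output
    path = workdir / f"{name}_output.txt"
    path.write_text(output)
    return (f"[output of {len(output)} chars written to {path}; "
            f"preview: {output[:200]}; use read_file to load more]")

def read_file(path: str, start: int = 0, length: int = 1000) -> str:
    """Agent-invokable tool: read only the slice it needs into context."""
    return pathlib.Path(path).read_text()[start:start + length]

big_response = "x" * 40_000  # stand-in for a huge API response
message = call_tool_with_offload("api_call", big_response)
print(message)  # short pointer, not 40,000 chars of payload
```

The message history stays small because only the pointer and preview enter context; the agent decides for itself whether the full payload is worth reading back in.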

