Chain of Thought | AI Agents, Infrastructure & Engineering

Conor Bronsdon
Mar 25, 2026 • 44min

Context Poisoning Is Killing Your AI Agents: How to Stop It

Michel Tricot, CEO and co-founder of Airbyte, a data integration leader building an agent engine and context store, argues that context poisoning, not the models themselves, is why agents fail. Live demos compare raw API calls to a context store, showing massive token and time savings. The discussion covers why RAG alone falls short, entity tracking across SaaS systems without embeddings, and the new role of context engineering.
Mar 10, 2026 • 44min

I Started r/AI_Agents and Now I'm Launching a VC Fund

Yujian Tang, founder of Seattle Startup Summit and creator of r/AI_Agents, grew a huge AI community and is now launching an AI-focused VC fund. He recounts building events and hackathons into deal flow. They cover the mechanics of starting a fund, sudden subreddit growth, valuation inflation in AI, and lessons from two failed startups.
Mar 4, 2026 • 1h 2min

I Built an AI Coworker That Runs 90% of My Day

Sterling Chin, an Applied AI engineer and Senior Developer Advocate at Postman, built MARVIN, a personal AI assistant that automates developer workflows. He demos how MARVIN bookends his workday, converts meeting transcripts into Jira updates, and uses sub-agents, personality rules, and integrations to act like a junior colleague. He also explains onboarding, security, and why DIY assistants can beat big tech alternatives.
Feb 26, 2026 • 54min

How Intercom Cut $250K/Month by Ditching GPT for Qwen

Intercom was spending $250K/month on a single summarization task using GPT. Then they replaced it with a fine-tuned 14B-parameter Qwen model and saved almost all of it. In this episode, Intercom's Chief AI Officer, Fergal Reid, walks through exactly how they made that call, where their approach has changed over time, and how all of their efforts built their Fin customer service agent.

Fergal breaks down how Fin went from a 30% to nearly 70% resolution rate and why most of those gains came from surrounding systems (custom re-rankers, retrieval models, query canonicalization), not the core frontier LLM. He explains why higher latency counterintuitively increases resolution rates, how they built a custom re-ranker that outperformed Cohere using ModernBERT, and why he believes vertically integrated AI products will win in the long term.

If you're deciding between fine-tuning open-weight models and using frontier APIs in production, you won't find a more detailed decision-process walkthrough.

🔗 Connect with Fergal:
Twitter/X: https://x.com/fergal_reid
LinkedIn: https://www.linkedin.com/in/fergalreid/
Fin: https://fin.ai/

🔗 Connect with Conor:
YouTube: https://www.youtube.com/@ConorBronsdon
Newsletter: https://conorbronsdon.substack.com/
Twitter/X: https://x.com/ConorBronsdon
LinkedIn: https://www.linkedin.com/in/conorbronsdon/

🔗 More episodes: https://chainofthought.show

CHAPTERS
0:00 Intro
0:46 Why Intercom Completely Reversed Their Fine-Tuning Position
8:00 The $250K/Month Summarization Task (Query Canonicalization)
11:25 Training Infrastructure: H200s, LoRA to Full SFT, and GRPO
14:09 Why Qwen Models Specifically Work for Production
18:03 Goodhart's Law: When Benchmarks Lie
19:47 A/B Testing AI in Production: Soft vs. Hard Resolutions
25:09 The Latency Paradox: Why Slower Responses Get More Resolutions
26:33 Why Per-Customer Prompt Branching Is Technical Debt
28:51 Sponsor: Galileo
29:36 Hiring Scientists, Not Just Engineers
32:15 Context Engineering: Intercom's Full RAG Pipeline
35:35 Customer Agent, Voice, and What's Next for Fin
39:30 Vertical Integration: Can App Companies Outrun the Labs?
47:45 When Engineers Laughed at Claude Code
52:23 Closing Thoughts

TAGS
Fergal Reid, Intercom, Fin AI agent, open-weight models, Qwen models, fine-tuning LLMs, post-training, RAG pipeline, customer service AI, GRPO reinforcement learning, A/B testing AI, Claude Code, vertical AI integration, inference cost optimization, context engineering, AI agents, ModernBERT reranker, scaling AI teams, Conor Bronsdon, Chain of Thought
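The re-ranking step described above can be sketched in miniature. This is a hedged illustration, not Intercom's implementation: the `overlap_score` function below is a hypothetical stand-in for a fine-tuned cross-encoder (such as a ModernBERT-based model), which would score each (query, passage) pair jointly before prompt assembly.

```python
# Minimal sketch of re-ranking retrieved passages before they reach the LLM.
# In a real pipeline, the scoring function would be a fine-tuned cross-encoder;
# here a toy word-overlap score stands in so the control flow is runnable.

def overlap_score(query: str, passage: str) -> float:
    """Hypothetical stand-in for a cross-encoder relevance score."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(query: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Keep only the top_k most relevant passages for the prompt."""
    ranked = sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)
    return ranked[:top_k]

passages = [
    "Refund requests are processed within 5 business days.",
    "Our office dog is named Fin.",
    "To request a refund, open Settings and select Billing.",
]
print(rerank("how do I request a refund", passages))
```

The point of the episode is that this surrounding machinery, not the frontier model itself, drove most of Fin's resolution-rate gains.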
Jan 21, 2026 • 50min

How Block Deployed AI Agents to 12,000 Employees in 8 Weeks w/ MCP | Angie Jones

How do you deploy AI agents to 12,000 employees in just 8 weeks? How do you do it safely? Angie Jones, VP of Engineering for AI Tools and Enablement at Block, joins the show to share exactly how her team pulled it off.

Block (the company behind Square and Cash App) became an early adopter of Model Context Protocol (MCP) and built Goose, their open-source AI agent that's now a reference implementation for the Agentic AI Foundation. Angie shares the challenges they faced, the security guardrails they built, and why letting employees choose their own models was critical to adoption.

We also dive into vibe coding (including Angie's experience watching Jack Dorsey vibe code a feature in 2 hours), how non-engineers are building their own tools, and what MCP unlocks when you connect multiple systems together.

Chapters:
00:00 Introduction
02:02 How Block deployed AI agents to 12,000 employees
05:04 Challenges with MCP adoption and security at scale
07:10 Why Block supports multiple AI models (Claude, GPT, Gemini)
08:40 Open source models and local LLM usage
09:58 Measuring velocity gains across the organization
10:49 Vibe coding: Benefits, risks & Jack Dorsey's 2-hour feature build
13:46 Block's contributions to the MCP protocol
14:38 MCP in action: Incident management + GitHub workflow demo
15:52 Addressing MCP criticism and security concerns
18:41 The Agentic AI Foundation announcement (Block, Anthropic, OpenAI, Google, Microsoft)
21:46 AI democratization: Non-engineers building MCP servers
24:11 How to get started with MCP and prompting tips
25:42 Security guardrails for enterprise AI deployment
29:25 Tool annotations and human-in-the-loop controls
30:22 OAuth and authentication in Goose
32:11 Use cases: Engineering, data analysis, fraud detection
35:22 Goose in Slack: Bug detection and PR creation in 5 minutes
38:05 Goose vs Claude Code: Open source, model-agnostic philosophy
38:17 Live Demo: Council of Minds MCP server (9-persona debate)
45:52 What's next for Goose: IDE support, ACP, and the $100K contributor grant
47:57 Where to get started with Goose

Connect with Angie on LinkedIn: https://www.linkedin.com/in/angiejones/
Angie's Website: https://angiejones.tech/
Follow Angie on X: https://x.com/techgirl1908
Goose GitHub: https://github.com/block/goose
Connect with Conor on LinkedIn: https://www.linkedin.com/in/conorbronsdon/
Follow Conor on X: https://x.com/conorbronsdon
Modular: https://www.modular.com/

Presented By: Galileo AI
Download Galileo's Mastering Multi-Agent Systems for free here: https://galileo.ai/mastering-multi-agent-systems

Topics Covered:
- How Block deployed Goose to all 12,000 employees
- Building enterprise security guardrails for AI agents
- Model Context Protocol (MCP) deep dive
- Vibe coding benefits and risks
- The Agentic AI Foundation (Block, Anthropic, OpenAI, Google, Microsoft, AWS)
- MCP sampling and the Council of Minds demo
- OAuth authentication for MCP servers
- Goose vs Claude Code and other AI coding tools
- Non-engineers building AI tools
- Fraud detection with AI agents
- Goose in Slack for real-time bug fixing
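To make the MCP discussion concrete: the protocol lets an agent like Goose discover a server's tools and invoke them over a JSON-RPC-style exchange. The toy dispatcher below is a simplified sketch of those two request shapes, not the official MCP SDK; the `create_ticket` tool and its behavior are hypothetical.

```python
# Simplified sketch of the two core MCP interactions an agent uses:
# "tools/list" advertises available tools, "tools/call" invokes one.
# This is illustration-only and omits transport, sessions, and auth.

TOOLS = {
    "create_ticket": {
        "description": "Open an incident ticket",
        "inputSchema": {"type": "object",
                        "properties": {"title": {"type": "string"}}},
    },
}

def create_ticket(title: str) -> str:
    # Hypothetical tool body; a real server would hit a ticketing API here.
    return f"ticket created: {title}"

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC-style MCP request to the matching handler."""
    if request["method"] == "tools/list":
        return {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    if request["method"] == "tools/call":
        result = create_ticket(**request["params"]["arguments"])
        return {"content": [{"type": "text", "text": result}]}
    return {"error": "unknown method"}

print(handle({"method": "tools/list"}))
print(handle({"method": "tools/call",
              "params": {"name": "create_ticket",
                         "arguments": {"title": "checkout latency spike"}}}))
```

The security guardrails Angie describes (tool annotations, human-in-the-loop approval, OAuth) wrap around exactly this call path.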
Jan 14, 2026 • 51min

Gemini 3 & Robot Dogs: Inside Google DeepMind's AI Experiments | Paige Bailey

Google DeepMind is reshaping the AI landscape with an unprecedented wave of releases, from Gemini 3 to robotics and even data centers in space. Paige Bailey, AI Developer Relations Lead at Google DeepMind, joins us to break down the full Google AI ecosystem. From her unique journey as a geophysicist-turned-AI-leader who helped ship GitHub Copilot, to now running developer experience for DeepMind's entire platform, Paige offers an insider's view of how Google is thinking about the future of AI.

The conversation covers the practical differences between Gemini 3 Pro and Flash, when to use the open-source Gemma models, and how tools like Anti-Gravity IDE, Jules, and Gemini CLI fit into developer workflows. Paige also demonstrates Space Math Academy, a gamified NASA curriculum she built using AI Studio, Colab, and Anti-Gravity, showing how modern AI tools enable rapid prototyping. The discussion then ventures into AI's physical frontier: robotics powered by Gemini on Raspberry Pi, Google's robotics trusted tester program, and the ambitious Project Suncatcher exploring data centers in space.

Chapters:
00:00 Introduction
01:30 Paige's Background & Connection to Modular
02:29 Gemini Integration Across Google Products
03:04 Jules, Gemini CLI & Anti-Gravity IDE Overview
03:48 Gemini 3 Flash vs Pro: Live Demo & Pricing
06:10 Choosing the Right Gemini Model
09:42 Google's Hardware Advantage: TPUs & JAX
10:16 TensorFlow History & Evolution to JAX
11:45 NeurIPS 2025 & Google's Research Culture
14:40 Google Brain to DeepMind: The Merger Story
15:24 Palm II to Gemini: Scaling from 40 People
18:42 Gemma Open Source Models
20:46 Anti-Gravity IDE Deep Dive
23:53 MCP Protocol & Chrome DevTools Integration
26:57 Gemini CLI in Google Colab
28:00 Image Generation & AI Studio Traffic Spikes
28:46 Space Math Academy: Gamified NASA Curriculum
31:31 Vibe Coding: Building with AI Studio & Anti-Gravity
36:02 AI From Bits to Atoms: The Robotics Frontier
36:40 Stanford Puppers: Gemini on Raspberry Pi Robots
38:35 Google's Robotics Trusted Tester Program
40:59 AI in Scientific Research & Automation
42:25 Project Suncatcher: Data Centers in Space
45:00 Sustainable AI Infrastructure
47:14 Non-Dystopian Sci-Fi Futures
47:48 Closing Thoughts & Resources

Connect with Paige:
- LinkedIn: https://www.linkedin.com/in/dynamicwebpaige/
- X: https://x.com/DynamicWebPaige
- Website: https://webpaige.dev/
- Google DeepMind: https://deepmind.google/
- AI Studio: https://ai.google.dev

Connect with our host Conor Bronsdon:
- Substack: https://conorbronsdon.substack.com/
- LinkedIn: https://www.linkedin.com/in/conorbronsdon/

Presented By: Galileo.ai
Download Galileo's Mastering Multi-Agent Systems for free here: https://galileo.ai/mastering-multi-agent-systems

Topics Covered:
- Gemini 3 Pro vs Flash comparison (pricing, speed, capabilities)
- When to use Gemma open-source models
- Anti-Gravity IDE, Jules, and Gemini CLI workflows
- Google's TPU hardware advantage
- History of TensorFlow, JAX, and Google Brain
- Space Math Academy demo (gamified education)
- AI-powered robotics (Stanford Puppers on Raspberry Pi)
- Project Suncatcher (orbital data centers)
Dec 19, 2025 • 37min

Explaining Eval Engineering | Galileo's Vikram Chatterji

You've heard of evaluations, but eval engineering is the difference between AI that ships and AI that's stuck in prototype.

Most teams still treat evals like unit tests: write them once, check a box, move on. But when you're deploying agents that make real decisions, touch real customers, and cost real money, those one-time tests don't cut it. The companies actually shipping production AI at scale have figured out something different: they've turned evaluations into infrastructure, into IP, into the layer where domain expertise becomes executable governance.

Vikram Chatterji, CEO and Co-founder of Galileo, returns to Chain of Thought to break down eval engineering: what it is, why it's becoming a dedicated discipline, and what it takes to actually make it work. Vikram shares why generic evals are plateauing, how continuous learning loops drive accuracy, and why he predicts "eval engineer" will become as common a role as "prompt engineer" once was.

In this conversation, Conor and Vikram explore:
- Why treating evals as infrastructure, not checkboxes, separates production AI from prototypes
- The plateau problem: why generic LLM-as-a-judge metrics can't break 90% accuracy
- How continuous human feedback loops improve eval precision over time
- The emerging "eval engineer" role and what the job actually looks like
- Why 60-70% of AI engineers' time is already spent on evals
- What multi-agent systems mean for the future of evaluation
- Vikram's framework for baking trust AND control into agentic applications

Plus: Conor shares news about his move to Modular and what it means for Chain of Thought going forward.

Chapters:
00:00 Introduction: Why Evals Are Becoming IP
01:37 What Is Eval Engineering?
04:24 The Eval Engineering Course for Developers
05:24 Generic Evals Are Plateauing
08:21 Continuous Learning and Human Feedback
11:01 Human Feedback Loops and Eval Calibration
13:37 The Emerging Eval Engineer Role
16:15 What Production AI Teams Actually Spend Time On
18:52 Customer Impact and Lessons Learned
24:28 Multi-Agent Systems and the Future of Evals
30:27 MCP, A2A Protocols, and Agent Authentication
33:23 The Eval Engineer Role: Product-Minded + Technical
34:53 Final Thoughts: Trust, Control, and What's Next

Connect with Conor Bronsdon:
Substack: https://conorbronsdon.substack.com/
LinkedIn: https://www.linkedin.com/in/conorbronsdon/
X (Twitter): https://x.com/ConorBronsdon

Learn more about Eval Engineering: https://galileo.ai/evalengineering

Connect with Vikram Chatterji:
LinkedIn: https://www.linkedin.com/in/vikram-chatterji/
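The human-feedback calibration loop discussed in the episode can be sketched in a few lines. This is a hedged illustration under made-up data, not Galileo's method: an automated judge emits a confidence score per response, humans spot-label a sample, and the pass threshold is tuned to maximize agreement with the human verdicts.

```python
# Toy eval-calibration loop: tune an LLM-as-a-judge pass threshold against
# human spot-check labels. Scores and labels below are illustration data.

def agreement(scores, labels, threshold):
    """Fraction of cases where (score >= threshold) matches the human label."""
    return sum((s >= threshold) == ok for s, ok in zip(scores, labels)) / len(scores)

def calibrate(scores, labels, candidates):
    """Pick the candidate threshold with the best human agreement."""
    return max(candidates, key=lambda t: agreement(scores, labels, t))

judge_scores = [0.91, 0.62, 0.85, 0.40, 0.77, 0.55]    # judge confidence per response
human_labels = [True, False, True, False, True, False]  # human "pass" verdicts

best = calibrate(judge_scores, human_labels, [0.5, 0.6, 0.7, 0.8])
print(best, agreement(judge_scores, human_labels, best))
```

Re-running this loop as fresh human labels arrive is one simple form of the continuous learning Vikram argues separates eval infrastructure from one-time checkbox tests.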
Nov 26, 2025 • 59min

Debunking AI's Environmental Panic | Andy Masley

Andy Masley, Director of Effective Altruism DC and a former physics teacher, joins the discussion to debunk common myths surrounding AI's environmental impact. He reveals a staggering 4,500x error in a bestselling book regarding a data center's water usage. They explore how many AI water usage claims are misleading and emphasize that using AI tools has a minimal environmental footprint. Andy argues for focusing on systemic issues like data center efficiency and suggests that AI could ultimately help mitigate climate change.
Nov 19, 2025 • 1h 18min

The Critical Infrastructure Behind the AI Boom | Cisco CPO Jeetu Patel

Jeetu Patel, President and Chief Product Officer at Cisco, shares insights on the critical infrastructure needed for AI's rapid growth. He discusses three major constraints: infrastructure limits, trust issues from non-deterministic models, and a data gap. Jeetu highlights Cisco's approach to building secure AI factories and their collaborations with major partners like NVIDIA. He also emphasizes why enterprises may soon utilize thousands of specialized models and the importance of high-trust teams. Join him for a deep dive into the future of AI infrastructure!
Nov 12, 2025 • 53min

Beyond Transformers: How Liquid AI Is Rethinking LLM Architecture | Maxime Labonne

Maxime Labonne, Head of Post-Training at Liquid AI and creator of a popular LLM course, dives into the future of AI architectures. He reveals how Liquid AI’s hybrid model merges transformers with convolutional layers for efficiency on edge devices. Maxime discusses the pivotal role of post-training in maximizing AI capabilities and the use of synthetic data. He shares insights on small on-device models, creative applications, and the challenges of function calling—making complex AI evolution both relatable and accessible.
