

Chain of Thought | AI Agents, Infrastructure & Engineering
Conor Bronsdon
AI is reshaping infrastructure, strategy, and entire industries. Host Conor Bronsdon talks to the engineers, founders, and researchers building breakthrough AI systems about what it actually takes to ship AI in production, where the opportunities lie, and how leaders should think about the strategic bets ahead.
Chain of Thought translates technical depth into actionable insights for builders and decision-makers. New episodes bi-weekly.
Conor Bronsdon is an angel investor in AI and dev tools, Head of Technical Ecosystem at Modular, and previously led growth at AI startups Galileo and LinearB.
Episodes

Apr 8, 2026 • 1h 18min
Why LLMs Are Plausibility Engines, Not Truth Engines | Dan Klein
Dan Klein, co-founder and CTO of Scaled Cognition and UC Berkeley CS professor known for NLP and conversational AI, explains why large language models are plausibility engines, not truth machines. He covers why prototypes fail to ship, the limits of prompting, APT1’s action-first architecture, metacognition to curb hallucinations, and why stacking models and benchmarks can give a false sense of reliability.

Apr 2, 2026 • 59min
Agent Memory: The Last Battleground in the AI Stack | Richmond Alake
Richmond Alake, Oracle's AI DevEx lead and creator of the MemoRiz library, is a leading voice on agent memory. He describes why memory engineering deserves its own discipline. He demos a memory-aware financial agent that runs vector, graph, spatial, and relational search in one query. He explains controlled forgetting, four human memory types mapped to agents, and why databases beat files in production.

Mar 25, 2026 • 44min
Context Poisoning Is Killing Your AI Agents: How to Stop It
Michel Tricot, CEO and co-founder of Airbyte, a data integration leader building an agent engine and context store, argues that context poisoning, not the models themselves, is why agents fail. Live demos compare raw API calls to a context store, showing massive token and time savings. Discussion covers why RAG alone falls short, entity tracking across SaaS systems without embeddings, and the new role of context engineering.

Mar 10, 2026 • 44min
I Started r/AI_Agents and Now I'm Launching a VC Fund
Yujian Tang, founder of Seattle Startup Summit and creator of r/AI_Agents, grew a huge AI community and is now launching an AI-focused VC fund. He recounts building events and hackathons into deal flow. They cover the mechanics of starting a fund, sudden subreddit growth, valuation inflation in AI, and lessons from two failed startups.

Mar 4, 2026 • 1h 2min
I Built an AI Coworker That Runs 90% of My Day
Sterling Chin, an Applied AI engineer and Senior Developer Advocate at Postman, built MARVIN, a personal AI assistant that automates developer workflows. He demos how MARVIN bookends his workday, converts meeting transcripts into Jira updates, and uses sub-agents, personality rules, and integrations to act like a junior colleague. He also explains onboarding, security, and why DIY assistants can beat big tech alternatives.

Feb 26, 2026 • 54min
How Intercom Cut $250K/Month by Ditching GPT for Qwen
Fergal Reid, Chief AI Officer at Intercom, built the Fin system and led the company's model strategy. He explains swapping GPT for a fine-tuned 14B Qwen to cut a $250K/month cost, the training stack from LoRA to full SFT on H200s, and how re-rankers, retrieval, and query canonicalization drove resolution gains. He also covers A/B testing, the latency paradox, and why vertically integrated AI wins long term.

Jan 21, 2026 • 50min
How Block Deployed AI Agents to 12,000 Employees in 8 Weeks w/ MCP | Angie Jones
Angie Jones, VP of Engineering for AI Tools and Enablement at Block and open-source maintainer, shares how her team rolled out Goose and MCP to 12,000 employees in 8 weeks. She discusses security guardrails, multi-model support, vibe coding stories (including a 2-hour build), MCP workflows connecting systems, and how non-engineers are building agentic tools.

Jan 14, 2026 • 51min
Gemini 3 & Robot Dogs: Inside Google DeepMind's AI Experiments | Paige Bailey
Google DeepMind is reshaping the AI landscape with an unprecedented wave of releases, from Gemini 3 to robotics and even data centers in space. Paige Bailey, AI Developer Relations Lead at Google DeepMind and a geophysicist-turned-AI-leader who helped ship GitHub Copilot, breaks down the full Google AI ecosystem and how the company is thinking about the future of AI. The conversation covers the practical differences between Gemini 3 Pro and Flash, when to use the open-source Gemma models, and how tools like Anti-Gravity IDE, Jules, and Gemini CLI fit into developer workflows. Paige also demos Space Math Academy, a gamified NASA curriculum she built using AI Studio, Colab, and Anti-Gravity, then ventures into AI's physical frontier: Gemini-powered robots on Raspberry Pi, Google's robotics trusted tester program, and Project Suncatcher's exploration of orbital data centers.

Dec 19, 2025 • 37min
Explaining Eval Engineering | Galileo's Vikram Chatterji
Vikram Chatterji, CEO and co-founder of Galileo, builds eval engineering and AI observability tools. He explains turning evaluations into scalable infrastructure, why generic evals plateau, and how continuous human-in-the-loop feedback and tuned SLMs drive reliable production AI. Short takes on runtime controls, multi-agent evaluation, and the rise of the eval engineer role.

Nov 26, 2025 • 59min
Debunking AI's Environmental Panic | Andy Masley
Andy Masley, Director of Effective Altruism DC and a former physics teacher, joins the discussion to debunk common myths surrounding AI's environmental impact. He reveals a staggering 4,500x error in a bestselling book regarding a data center's water usage. They explore how many AI water usage claims are misleading and emphasize that using AI tools has a minimal environmental footprint. Andy argues for focusing on systemic issues like data center efficiency and suggests that AI could ultimately help mitigate climate change.


