
The AI podcast for product teams
75% of Enterprise AI Fails. The Fix Isn't a Better Model.
Every influencer is drooling over Claude Code skills files. Every product team is chasing the next model release. But for two years, the data has been screaming the same thing: capability isn’t the bottleneck. Context is. This edition unpacks what that actually means — why structured business knowledge is the highest-leverage investment a product team can make, what the “context wars” look like from the inside, and why the teams winning aren’t the ones with the best models. They’re the ones whose AI actually understands their business.
What You’ll Learn in This Edition
This edition confronts the structural reason most AI products fail — they’re missing the context that makes capability useful.
* Why Juan Sequeda from ServiceNow says “hope is not a strategy” — and what to build instead of better prompts
* The three-layer knowledge framework that gives AI a shared language across your entire organization
* Why CNBC’s “silent failure at scale” investigation found that 91% of ML models degrade without anyone noticing
* How Microsoft’s adoption of ontology — the same concept Juan has championed for 20 years — anchors its agentic AI architecture
* What Citadel Securities data shows: software engineer job postings up 11% YoY despite the displacement narrative
Episode 3: Context Is the New Moat — Why Your AI Needs Business Knowledge, Not Better Prompts
Every influencer is drooling over skills files and prompt templates. Juan Sequeda, Principal Scientist at data.world (acquired by ServiceNow), has spent 20 years proving that none of it works without structured business knowledge underneath. In this episode, Juan breaks down the three-layer framework — business metadata, technical metadata, and the mapping layer that creates real semantics — and explains why the teams investing in ontology today will compound value across every AI use case they build next. His blunt assessment of skills files as a production strategy: “Hope is an interesting strategy. It’s not one that I add to my strategy.”
“If you just edit in skills, I don’t think that’s gonna be the solution to your problem. You’ll have a great POC. It’ll work for the use cases you tested on. Are you willing to put your career on the line and put that in production?” — Juan Sequeda
Listen on Spotify | Apple Podcasts | YouTube
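To make the three-layer framework concrete, here is a minimal sketch of how business metadata, technical metadata, and a mapping layer fit together. Every name in it (the term “order,” the table, the columns, the `resolve` helper) is a hypothetical illustration, not data.world’s or ServiceNow’s actual implementation:

```python
# Illustrative sketch of the three-layer knowledge framework.
# All terms, tables, and columns are hypothetical examples.

# Layer 1: business metadata -- what a term means to the organization.
business_metadata = {
    "order": "A confirmed customer purchase, excluding drafts and cancellations.",
}

# Layer 2: technical metadata -- where the data physically lives.
technical_metadata = {
    "sales.order_header": ["order_id", "status", "created_at", "total_amount"],
}

# Layer 3: the mapping layer -- the semantics connecting business terms
# to physical data, including the filters that encode business rules.
mapping_layer = {
    "order": {
        "source": "sales.order_header",
        "filter": "status = 'CONFIRMED'",
    },
}

def resolve(term: str) -> str:
    """Translate a business term into a grounded, queryable definition."""
    meaning = business_metadata[term]
    mapping = mapping_layer[term]
    columns = technical_metadata[mapping["source"]]
    return (
        f"'{term}' means: {meaning} "
        f"Backed by {mapping['source']} ({', '.join(columns)}) "
        f"where {mapping['filter']}."
    )

print(resolve("order"))
```

The point of the sketch: when an AI system asks “what does order mean?”, the answer comes from this layer, not from a prompt the model may or may not honor.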
Context isn’t a nice-to-have. It’s the architecture layer that determines whether your AI product delivers consistent, measurable value or drifts into silent failure. PH1 built this framework to illustrate what Juan Sequeda has been researching for two decades: intent, background, examples, and templates aren’t prompt engineering tricks — they’re the structural foundation that transforms an AI system from a “forever intern” into a strategic partner. Without them, you’re hoping the model figures out what “order” means in your business. Hope, as Juan puts it, is not a strategy.
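One way to picture the four elements above (intent, background, examples, templates) is as a structured object assembled before any model call. This is a toy sketch under assumed field names, not PH1’s actual framework:

```python
# Toy sketch: the context layer as a structured object, assembled once
# and rendered into every model call. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ContextLayer:
    intent: str        # what the system is for
    background: str    # business knowledge the model cannot infer on its own
    examples: list = field(default_factory=list)  # grounded demonstrations
    template: str = "{intent}\n\nBackground:\n{background}\n\nExamples:\n{examples}"

    def render(self) -> str:
        """Assemble the four elements into a single structured prompt."""
        return self.template.format(
            intent=self.intent,
            background=self.background,
            examples="\n".join(f"- {e}" for e in self.examples),
        )

ctx = ContextLayer(
    intent="Answer order-status questions for support agents.",
    background="An 'order' is a confirmed purchase; drafts are excluded.",
    examples=["Q: Is order 1042 shipped? A: Check status in sales.order_header."],
)
print(ctx.render())
```

The design choice worth noting: the business definition lives in `background`, versioned alongside the template, rather than being re-typed into each prompt by hand.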
RAG Was the Answer. Now It’s a Symptom of the Real Problem.
RAG dominated for two years as the default way to give LLMs context. But as context windows expanded from 8K to a million tokens, the question shifted. This video breaks down when RAG still matters — vast, dynamic datasets and cost efficiency — and when long context windows make the retrieval layer unnecessary. The strategic implication for product teams: RAG was always a workaround for a deeper problem. The real question was never “how do I retrieve the right document?” It was “does my system actually understand my business?” That’s the context layer Juan Sequeda is building — and it sits beneath RAG, long context, and every other implementation detail.
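The trade-off described above can be reduced to a rough heuristic: retrieval still pays off when the corpus is too large to fit in the window or changes frequently; long context wins for small, stable corpora. The thresholds below are illustrative assumptions, not benchmarks:

```python
# Toy decision heuristic for RAG vs. long context.
# Thresholds are illustrative assumptions, not measured benchmarks.

def choose_context_strategy(corpus_tokens: int,
                            updates_per_day: int,
                            context_window: int = 1_000_000) -> str:
    if corpus_tokens > context_window:
        return "rag"            # corpus cannot fit in the window at all
    if updates_per_day > 10:
        return "rag"            # dynamic data favors retrieval over re-stuffing
    return "long_context"       # small, stable corpus: just include it

print(choose_context_strategy(5_000_000, 0))   # huge corpus -> rag
print(choose_context_strategy(200_000, 0))     # small, stable -> long_context
```

Note that neither branch answers the deeper question the paragraph raises: whether the system understands the business. That sits a layer below this choice.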
Despite the displacement signals, software engineer job postings are up 11% year over year. But read the fine print: a posting titled “Software Engineer” increasingly means “engineer who can operate LLMs in production” or “engineer who builds RAG pipelines.” The title stayed the same — the job changed. If your team hasn’t redefined what “engineering” means in the context of AI-augmented workflows, you’re hiring for yesterday’s role.
Product Impact Resources
The pattern across these resources is consistent: the teams pulling ahead are the ones investing in context, knowledge, and governance infrastructure — not chasing the next model release. Capability is table stakes. The moat is how deeply your product understands the business it serves.
* Gartner predicts 80% of enterprises pursuing AI will use knowledge graphs by 2026 to enhance context and reasoning. The shift from “better prompts” to “structured knowledge” is no longer theoretical. The Role of Knowledge Graphs in Building Agentic AI Systems
* Microsoft adopted ontology as the foundation of its agentic AI architecture — Fabric IQ, Foundry IQ, and Work IQ create a semantic layer that gives agents shared business understanding. Microsoft Adopts Ontology-Based IQ Layer for Agentic AI
* Nathan Lasnoski argues that enterprise knowledge graphs are the foundation for moving from vibe coding to scalable agentic development — without semantic grounding, agents can’t reason across systems. Building an Enterprise Knowledge Graph for the SDLC
* HBR analysis reveals AI adoption stalls because of employee anxiety about relevance and identity — not technical limitations. The behavioral barriers are harder than the technical ones. Why AI Adoption Stalls, According to Industry Data
* WEF data shows organizations with strong governance and more than 5% of IT budget allocated to AI see 70-75% positive outcomes vs. 50-55% without. Governance is infrastructure, not a bottleneck. Strong AI Governance Is a Business Advantage, Not a Bottleneck
* Deloitte’s agentic AI strategy report calls for governance and observability as first-class product features — agentic systems should expose provenance, tool-call traces, and policy decisions by default. Agentic AI Strategy
* Teresa Torres warns that AI without product discovery just means “shipping the wrong stuff faster.” The line lands directly on this edition’s thesis — capability without context is an accelerant of bad decisions, not good ones. Strong potential guest. Shipping the Wrong Stuff Faster
* Roger Wong unpacks Jenny Wen’s (Anthropic Head of Design) “ship fast, iterate publicly, build trust through speed” approach as a new design paradigm for AI products. Jenny Wen is a compelling guest lead given her role building Claude’s product experience. The Design Process Is Dead
* Meta’s alignment director had an OpenClaw agent start rapidly deleting her inbox — she thought it would confirm first. It didn’t. She ran to a Mac mini “like I was defusing a bomb.” Stuart Winter-Tear’s breakdown is a vivid, concrete case study of agentic AI failure in practice. Human in the Loop Is a Job
* Academic paper in Communications Psychology (Nature) argues that friction in AI design is a feature, not a bug — challenging the default “make it seamless” paradigm. Co-authors from U of T, Wharton, and Yale. Emily Zohar is a strong potential guest with a contrarian take that plays well on the podcast. Against Frictionless AI
Product Impact News
The news this edition reinforces a single uncomfortable truth: the biggest AI failures aren’t technical — they’re contextual. Systems that lack business knowledge don’t crash dramatically. They drift silently, producing outputs that look right but are wrong in ways no telemetry catches.
* CNBC investigated “silent failure at scale” — a beverage manufacturer’s AI ordered thousands of excess cans because it couldn’t contextualize new holiday labels. 91% of ML models degrade over time, and most enterprises never detect it. ‘Silent Failure at Scale’: The AI Risk That Can Tip the Business World Into Disorder
* Agentic AI’s dominant failure mode isn’t catastrophic breakdown — it’s silent drift. CIO reports that only 6% of organizations have fully deployed agents, and the Cloud Security Alliance now classifies cognitive degradation as systemic risk. Agentic AI Systems Don’t Fail Suddenly — They Drift Over Time
* Gartner predicts 40% of agentic AI projects will be scrapped by 2027. 90% of legacy agents fail within weeks. The primary driver is governance, not technology. Why 40% of Agentic AI Projects Will Fail
* Internal Microsoft data shows only 30% of Copilot enterprise licenses see weekly active usage after 6 months — despite unmatched distribution through Office. Workflow friction and unclear ROI are the barriers. Microsoft Copilot Adoption Stalls at 30% Active Usage
* Virtana surveyed 350+ senior IT leaders this month: 75% of enterprises report double-digit AI job failure rates, and a third exceed 25%. Meanwhile, 59% of executives think they’re prepared — but 62% of practitioners report fragmented systems and visibility gaps. The disconnect is the risk. 75% of Enterprises Report Double-Digit AI Failure Rates
* Citadel Securities rebuts the AI displacement narrative with data: software engineer postings up 11% YoY. But job postings requiring AI literacy grew 70% YoY — the title stayed the same, the job changed. Software Engineer Job Postings Are ‘Rapidly Rising’
* Tech Mahindra and Microsoft launched an ontology-driven agentic AI platform for telecoms — the first major enterprise deployment built on Microsoft’s Fabric IQ semantic layer. The context wars are real. Tech Mahindra and Microsoft Launch Ontology-Driven Agentic AI Platform
Key takeaways
The throughline is unmistakable: the AI products failing at scale aren’t missing capability — they’re missing context. From CNBC’s investigation into silent failures to Microsoft betting its entire agentic architecture on ontology, the market is converging on what Juan Sequeda has been saying for 20 years: structured business knowledge is the highest-leverage investment you can make.
* Context is infrastructure, not a feature. Skills files and prompt templates are band-aids. The teams compounding value across AI use cases are the ones that defined “what does order mean?” before they shipped anything. If your AI can’t disambiguate your business terminology, it can’t deliver consistent results.
* Governance accelerates adoption. The WEF data is clear: organizations with strong AI governance see 20 percentage points higher positive outcomes. Governance isn’t the thing slowing you down — the absence of it is why 40% of agentic projects get scrapped.
* The job didn’t disappear — it transformed. Software engineer postings are up 11%, but the role now requires AI literacy. The same is true for product managers, designers, and strategists. The question isn’t whether AI will replace you. It’s whether you’ll invest in the context that makes AI actually useful.
Check Out Recent Episodes
Episode 2: Defensibility > Capability — Five Actions to Defend Your Product Value
$73.6 billion went into GenAI startups in 2025, but 85% of AI startups will be out of business within three years. This episode tackles the economics of abundance and delivers five specific actions to redirect investment toward what actually survives: workflow depth, outcome visibility, and trust engineering. If you’re competing on features, you’re already exposed.
Episode 1: Why Your AI Metrics Are Lying to You
The bullseye framework for AI products — Power, Speed, Impact, and Joy. Most teams are measuring Power and calling it success. This episode introduces a three-layer evaluation approach and shows why completion metrics hide the signals that actually matter for growth.
AI Strategy Jobs
* Staff Product Designer, AI Workflows — ServiceNow (Remote/Hybrid)
* AI Product Manager — ServiceNow (Remote)
* Product Designer, ChatGPT — OpenAI (San Francisco)
* Product Designer, Platform & Tools — OpenAI (San Francisco)
* AI Product Manager, Strategic Roadmap — IDC (Remote)
* Principal Product Manager, AI Personalization — Cedar (New York)
* Senior Product Designer, Generative AI — Coda (Remote)
* Product Designer, AI Agents — Simular (Palo Alto)
* Director, Product Design, AI Transformation — Element AI (Santa Clara, CA — On-site, 65% travel)
* Product Designer — Fidelity (Merrimack, NH / Jersey City, NJ / Westlake, TX — Hybrid)
If your AI product demos well but can’t prove it drives value in production, that’s a context problem — and it’s the gap PH1 closes. We help teams build the measurement and knowledge infrastructure that turns AI capability into measurable business impact, from defining what success means to proving it with data. ph1.ca
Thank you for supporting the Product Impact Podcast
Every episode goes deeper than the headlines to uncover what actually drives AI product success — and what’s quietly killing it. If Juan’s take on context and ontology challenged how you think about your AI product’s foundation, share this episode with your team. Follow the show so you never miss one. That’s how we grow this community of builders who refuse to settle for capability without impact.
Browse all episodes at productimpactpod.com — filter by topic to find the episode that fits what you’re working on right now. We’re at 56 episodes across two seasons.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com
