
The AI podcast for product teams

Your AI Strategy Is a Pile of Demos
Let’s stop pretending. Most AI strategies are just a collection of pilots that nobody had the courage to kill. The data this period is brutal: 95% of genAI pilots stall. Only 11% reach production in financial services. Microsoft — the biggest company in the world, with the best distribution on the planet — just reorganized Copilot because nobody internally could agree on what it was supposed to be. And while enterprises burn cycles debating governance frameworks, a new class of startups is quietly replacing entire job functions. Not assisting. Replacing. The gap between the people who get this and everyone else isn’t a skills gap. It’s a courage gap. This edition is about which side you’re on.
What You’ll Learn in This Edition
This edition confronts the uncomfortable reality that most AI investments are producing demos, not outcomes, and examines the structural reasons why.
* 🎙 Why agents are automating your thinking, not just your tasks — and why that distinction matters more than any model release
* ✍️ Copilot’s identity crisis is the most important product failure of 2026 so far
* 👉 The single variable that predicts AI maturity 7x better than technology choices
* 1️⃣ Why advertising AI use is now a financial liability for professional services firms
* 2️⃣ The inference cost crisis that threatens every AI business model — including OpenAI’s
Episode 4: The Era of Agents — Your Cognition Is the Product Now
We mapped three years of AI evolution in this episode and landed somewhere uncomfortable. Era one gave us wrong answers. Era two gave us wrong context. Era three — agents — is giving us wrong actions. And the stakes compound with each era because AI is no longer just saying things. It’s doing things.
Brittany brought the number that should haunt every product leader: only 6% of organizations have fully deployed any kind of agent. Copilot hit 30% weekly active usage after six months — meaning 70% of enterprise users basically stopped opening it. The tools are moving at an extraordinary pace. Almost nobody is keeping up.
We profiled four startups, most of which you probably haven’t heard of, that are winning the point-solution war. But the real conversation was about what happens when you hand your thinking to an agent. Not your typing. Not your scheduling. Your thinking — the research, the monitoring, the analysis, the synthesis. Something changes in you when you do that. And most people haven’t reckoned with what that means.
“We’ve trained generations of people to think linearly. Step one, step two, step three, fill out this form, follow this process. Agents don’t work like that. Agents require you to think in terms of outcomes, connections, and context.” — Arpy
Listen now: Spotify | Apple Podcasts | YouTube
You’re invited to join the AI Strategy Experiments Zoom call today
Today (March 27) at 1pm ET we’re hosting a small group of strategists, builders, and designers to share their experiments and questions. Register here.
$490 billion in enterprise AI spending is delivering nothing. That’s not a technology failure. It’s a value creation failure. AI Value Acceleration exists to close that gap — diagnosing where AI value stalls and building playbooks that actually work. Value Assessment in 3 weeks. Value Amplification to go deep. Value Acceleration to prove what works. aivalueacceleration.com
Copilot Didn’t Fail. It Succeeded at Not Knowing What It Is.
Microsoft reorganized Copilot because of internal confusion. Read that again: internal confusion. Not external competition. Not technical limitations. The people building Copilot couldn’t agree on what it was for. Microsoft had everything a product could dream of — billions in funding, integration into every Office app, the largest enterprise distribution network on earth, and access to the most powerful models available. It didn’t matter. Without a clear product identity, all that distribution just delivered confusion at scale.
The uncomfortable truth: most AI products shipping today have the same disease. They’re a bundle of capabilities searching for a purpose. They demo beautifully. They onboard poorly. They get abandoned quietly. If the biggest company in the world can’t brute-force its way to product-market fit for an AI assistant, what makes you think your team can skip the hard work of defining what your AI product is actually for?
BCG: Why Usage Is Up but Impact Is Not
Employee-centric organizations are 7x more likely to be AI mature. Not 7% more likely. Seven times. Employee-centricity explains ~36% of variance in AI maturity outcomes. Model selection explains almost none of it.
Over 85% of organizations remain stuck at basic task assistance. Fewer than 10% have reached anything resembling semiautonomous collaboration. The teams pulling ahead didn’t start with better tools. They started with cultures where people felt safe to experiment, fail, and teach each other what they learned. HBR confirmed it separately: peer influence is the single most powerful predictor of AI adoption. When learning stays private, adoption stalls.
This is exactly why I built AI Value Acceleration — because the gap between what AI can do and what your organization is actually doing with it isn’t a technology gap. It’s a leadership gap. And closing it starts with measuring where value is being created, lost, and why.
Deloitte Put a Price Tag on Hallucinations. Then KPMG Made It Worse.
Deloitte issued a refund to the Australian government after errors in an AI-generated report. That’s a sentence that should terrify every professional services firm shipping AI-assisted work without rigorous review. But the follow-up is even more revealing: a competitor reportedly pushed KPMG to cut prices specifically because KPMG advertised AI use.
Think about that. Advertising AI didn’t increase perceived value. It decreased it. Clients heard “we use AI” and thought “then why am I paying you full price?” This is a new failure mode that nobody war-gamed: AI claims eroding the very premium they were supposed to justify. Every consulting firm, agency, and services company racing to slap “AI-powered” on their pitch decks needs to answer one question first — does your client believe they’re paying for AI’s work or yours? Because if it’s AI’s work, they’ll expect AI prices.
Product Impact Resources
Every resource this period points to the same conclusion: the companies pulling ahead aren’t chasing model releases. They’re building the structural layers — verification, governance, integration depth — that turn capability into production value. Everyone else is just running demos.
* The moat is the verification layer, not the model. Wolters Kluwer is grounding agents in proprietary knowledge graphs and allowing third-party queries via MCP for usage-based monetization. They’re not competing on intelligence. They’re competing on trust. This is the playbook for every company sitting on domain-specific data. Wolters Kluwer’s “System of Action” Strategy
* OpenAI’s real crisis isn’t competition. It’s unit economics. ~$5B loss on $3.7B revenue, with inference costs as the bottleneck. An IEEE-accepted paper highlights inference — not training — as the existential threat. Every company building on top of frontier models needs to understand: the model works. Serving it profitably doesn’t. The Inference Cost Crisis
* 70% of AI startups are wrappers. Investors are done pretending otherwise. Atoms AI Accelerator rejected 70% of applicants for lacking workflow depth or proprietary data. Google and Accel are doing the same. If your product is a chat interface over someone else’s model, you don’t have a company. You have a feature. Wrapper Rejection Is Now Institutional
* 95% of genAI pilots stall. The bottleneck is governance, not capability. Only 11% of pilots reach production in financial services. Integration complexity (58%), data gaps (47%), and unclear ROI (43%) outrank talent scarcity. The model isn’t the problem. The organization is. Why Pilots Die
* Non-human identities outnumber humans 82:1 in enterprises. That’s the attack surface for every production agent. 62% of practitioners cite security as the primary challenge. Until we solve agent authorization, most agentic AI stays in demo mode. The Authorization Gap
* Karpathy says 80% of his code is AI-written. The junior developers are paying the price. Entry-level engineering roles are shrinking. The PM role is evolving from translator to system architect. The skill that matters now is task decomposition and rigorous review of AI outputs — not writing code. If you’re not rethinking your hiring pipeline around this, you’re already behind. The Agentic Engineering Shift
* UX is the last moat — and most teams are cutting it. NNGroup found AI matches human UX work only 44% of the time. Trust is now the dominant design problem. The teams cutting UX researchers to fund AI engineers are creating the blind spots that will kill their products. Designers’ durable advantage lives in judgment and the “messy middle” — the part AI can’t touch. Why Designers Survive the Agent Era
Product Impact News
The headlines this period share a pattern: AI claims without evidence are becoming legally, financially, and organizationally dangerous. The era of “just say AI” is over.
* monday.com is getting sued for saying the word “AI” too confidently. They withdrew a $1.8B 2027 revenue target, triggering a 20.8% stock drop and a securities lawsuit alleging misleading AI investment statements. This is the new risk: AI-driven projections without verifiable metrics are now a securities liability. monday.com’s Legal Reckoning
* Crypto.com spent $70M on AI.com, then fired 12% of its workforce. The company framed cuts as eliminating roles that “do not adapt in our new world.” That’s not transformation. That’s using AI as cover for layoffs. When the narrative outruns the execution by this much, the credibility damage is permanent. AI-Washing Has Consequences
* GPT-5.2 is tiered now. Your CIO wants GPUs back on-premises. Three tiers — Instant, Thinking, Pro — plus MCP enterprise connectors. But the real story is CIOs pulling compute back in-house for data sovereignty, favoring open-weights models over cloud APIs. The frontier model race matters less when the enterprise won’t send its data to it. The Sovereignty Shift
* Cove AI built something promising. Microsoft swallowed it whole. The entire team was acquired and the product shut down. An AI collaboration platform — infinite whiteboard, AI-powered structured outputs — vanished into Copilot. If you’re building an AI startup adjacent to a platform company’s roadmap, this is your future. Platform Gravity
* Singapore just made governance a design requirement, not an afterthought. MAS released the AI Risk Management Handbook (Project MindForge), formalizing governance from design-time. Four pillars integrate legal, ethical, and governance requirements into AI products from inception. Every other jurisdiction is watching. Governance at Inception
* AI made your job harder, not easier. The data proves it. Post-AI adoption, email time rose 104% and chat time 145%. 14% report significant cognitive overload. Roles are becoming more complex, not simpler. And 66% of CEOs are freezing hiring while this happens. The promise was efficiency. The reality is intensity. The Work Intensification Problem
Key Takeaways
The uncomfortable pattern across every signal this period: the organizations failing with AI aren’t failing because the technology doesn’t work. They’re failing because they skipped the structural work that makes technology useful — clear product identity, governance readiness, cultural safety, and honest measurement. The ones succeeding did that work first.
* Your AI product’s biggest risk isn’t a competitor. It’s not knowing what it is. Copilot’s reorg is proof that distribution without identity produces abandonment at scale. Before you ship to everyone, answer the question Microsoft couldn’t: what is this product for, specifically, and how will someone’s week be different because of it?
* If you’re still chasing model upgrades, you’re optimizing the wrong layer. The decisive variables are governance (95% of pilots stall without it), culture (7x AI maturity for employee-centric orgs), and integration depth (verification beats capability). The model is the easiest part of the stack.
* The people pulling ahead aren’t smarter. They’re more honest about how they work. Agents demand a skill most professionals have never developed: describing what you actually do, clearly enough for a system to do it. That’s not a technology skill. It’s a self-awareness skill. And until you build it, every agent you deploy will amplify your worst habits instead of your best thinking.
Check Out Recent Episodes
Episode 3: Context Is the New Moat — Why Your AI Needs Business Knowledge — Juan Sequeda, Principal Researcher at ServiceNow, explains why RAG was always a workaround for a deeper problem: your AI doesn’t understand your business. The three-layer framework for semantic context that separates the teams compounding value from those still stuck in pilot purgatory.
Episode 2: Vibe Coding Changed Everything — Here’s What Comes Next — We sat down with Yoni Jozwiak, founder of Base44 ($80M revenue in 6 months), to unpack the defensibility crisis facing every AI startup. If anyone can build software by describing it, what’s actually defensible?
Episode 1: Why Your AI Metrics Are Lying to You — The framework for measuring AI product impact that most teams are getting wrong. Completion metrics hide signals that matter. Success ≠ satisfaction. The Power/Speed/Impact/Joy bullseye that changes how you evaluate everything.
AI Strategy Jobs
* Staff AI Product Designer, Mobile, GeminiApp — Google DeepMind (Walla Walla, WA — Hybrid)
* Lead AI Product Designer, IRIS — OVERJET (San Mateo, CA — Hybrid)
* Senior AI Product Manager — JPMorganChase (London, UK — On-site)
* AI Product Manager — Carrum Health (Chicago, IL — Remote)
* AI Product Manager — Nimber (Porto, Portugal — Remote)
* Senior AI Product Manager — Kaizen Gaming (Athens, Greece — Hybrid)
* Principal AI Product Manager — Eaton (Dublin, Ireland — Hybrid)
* VP AI Strategy — Prime Therapeutics (Atlanta, GA — Remote)
Your AI product demos well but can’t stick, scale, or justify cost? That gap between capability and value isn’t going to close itself. PH1 has spent 14 years helping product teams prove impact — from measuring what AI products actually deliver to improving the performance of LLM-powered experiences to defining AI vision that survives contact with real users. If the evidence in this edition makes you nervous about your own AI strategy, that’s the right reaction. Let’s talk about it.
Thank You for Supporting the Product Impact Podcast
This newsletter exists because you keep showing up, sharing what resonates, and pushing back when we get it wrong. That feedback loop is what makes this work. If this edition landed — forward it to someone who’s building with AI and needs to hear the parts nobody else is saying. And if you haven’t caught up on the full season, browse all episodes at productimpactpod.com.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com
