

The Chief AI Officer Show
Front Lines
The Chief AI Officer Show bridges the gap between enterprise buyers and AI innovators. Through candid conversations with leading Chief AI Officers and startup founders, we unpack the real stories behind AI deployment and sales. Get practical insights from those pioneering AI adoption and building tomorrow’s breakthrough solutions.
Episodes
Mentioned books

Apr 9, 2026 • 48min
Why AI won't save media without fixing the infrastructure underneath
What happens when a journalist turned Amazon product manager becomes the Chief AI Officer of one of the world's largest international broadcasters? You get someone who sees the AI threat to media not just as a distribution problem, but as a full production chain crisis that requires a fundamentally different organizational architecture.Marie Kilg, Chief AI Officer at Deutsche Welle, makes the case that legacy media's survival depends on something most AI transformation conversations ignore: data interoperability across systems that were never designed to talk to each other. With 32 languages, siloed editorial teams, and decades of layered organizational structure, Deutsche Welle's path to an AI-powered content flywheel starts at the infrastructure layer, not the model layer.Topics Discussed:Why AI threatens the full media production chain, not just distributionThe flywheel model: feeding audience data back into editorial decisionsData interoperability as the core prerequisite for AI at scale in mediaWhy "push a button and AI does it" expectations are damaging real implementationHow metadata automation surfaces hidden infrastructure debtOrganizational change mechanisms vs. culture change in large public broadcastersTech companies underestimating journalism as a discipline

Mar 26, 2026 • 45min
AI Won't Break Your Security Program. Your Gaps Will.
Most security leaders treat AI as a new threat category requiring new defenses. Rohit Parchuri, SVP and Chief Information Security Officer at Yext, pushes back hard on that. His argument: if your foundational controls are solid, AI does not require you to rebuild anything. What it does is amplify whatever you already have, gaps included, which makes the real question not "what new controls do we need?" but "how well are we actually executing on what we already built?"Rohit walks host Ben Gibert through how Yext is operationalizing this at scale: threat-modeling AI as just another system with inputs, processing, and outputs; building AI security testing directly into the existing CI/CD pipeline rather than standing it up as a separate track; investing heavily in data classification and taxonomy to solve DLP before deploying any AI tool internally; and establishing an AI Excellence Committee with cross-functional representation to run a single governance funnel across every AI request in the company. He also makes the case that the CISO who earns a seat at the AI strategy table is the one who deeply understands the business value chain, not just the threat landscape.Topics discussed:Threat-modeling AI as a system instead of a threat categoryWhy existing security controls are sufficient for AI todayIntegrating AI security testing into CI/CD without adding process overheadData classification and taxonomy as prerequisites for safe internal AI adoptionUsing an AI Bill of Materials as a transparency mechanismHow Yext's AI Excellence Committee runs a single governance funnelBuild vs. buy decision-making for AI security toolingWhat separates strategic CISOs from tactical operators in the age of AIThe CISO's role in enabling AI adoption rather than blocking it

Mar 12, 2026 • 42min
Building AI agents that fix production incidents before engineers wake up
Diamond Bishop has spent 15 years building AI systems at Microsoft (Cortana), Amazon (Alexa), and Facebook (PyTorch) before founding an AI DevOps startup that Datadog acquired. Now running Datadog's AI Skunk Works, a deliberately small interdisciplinary team modeled on Lockheed's original, he's focused on a question most enterprise AI teams aren't asking yet: what does your product look like if humans are no longer the primary customer?That question drives everything from Bits AI, their production SRE and security agent, to a set of longer-range bets organized around three pillars: personalized agent learning, enterprise agent infrastructure, and eval. Diamond breaks down how he structures each one, why the demo-to-production gap comes down to data and eval rather than model capability, and where the real unsolved problems in agent development still sit.Topics discussed:Bits AI's capabilities in production across SRE incident response, security analysis and code generationThree-pillar agent development framework: personalized learning, enterprise infrastructure and evalLoRA-style adapter architecture for layering custom per-user agents on top of first-party agentsWhy SRE agent startups without proprietary observability data face a structural disadvantage at production scaleService graph and entity relationship context as a structured alternative to RAG for DevOps agentsSkunk Works team design: staying small and interdisciplinary to move like a startup inside a public companyThe shift from human-operated cloud services to ambient AI-native services built to run with fewer humans over timeCrawl-walk-run path for enterprise agent adoption: from LangGraph-based Python agents to continuously learning systemsWhy concentrating AI research investment in transformer scaling creates long-term architectural riskBuilding agent-native tooling rather than repurposing interfaces designed for humans

Feb 26, 2026 • 47min
How Xoriant ties compensation to AI metrics: The revenue, margin, and brand multiple framework
Most enterprise AI initiatives die in pilot purgatory because organizations chase peripheral use cases instead of embedding AI into core business processes. Vineet Moroney, Chief Transformation Officer at Xoriant, a 6,000-person engineering services firm, has built a measurement system that eliminates this problem: tie AI directly to three financial metrics (revenue, margin, brand multiple) and make 50% of performance bonuses dependent on them.His framework separates AI revenue into two categories: "with AI" (AI-led service transformation like platform modernization) and "for AI" (building AI capabilities on customer platforms). AI margin captures efficiency gains from tool usage that improve project delivery economics. AI multiple quantifies brand value and downstream revenue from innovative deployments. This structure forces teams to distinguish between projects that matter and expensive experiments.When Xoriant's CFO wanted to reduce Days Sales Outstanding, Vineet built an invoice payment prediction model at 87% accuracy that eliminated a five-person AR team and cut DSO by two days. The solution required no expensive models, just strategic business case selection. For manufacturing clients, he's deploying edge AI on legacy sensor infrastructure for predictive maintenance without sensor replacement, creating new service revenue streams from installed equipment bases.Topics discussed:Three-part AI revenue model distinguishing "with AI" service transformation from "for AI" capability building on customer platformsCompensation structure allocating 50% of performance bonuses across AI revenue generation, margin improvement, and brand multipleThe EXB framework quantifying AI returns through efficiency gains, experience improvements via customer lifetime value, and business impact from downstream revenueTwo-week POC to 90-day production methodology with AI assurance testing protocols for non-deterministic system validationFive prerequisite elements for POC survival: strategic alignment, C-suite sponsorship, urgent business need, allocated budget, and core process focusEdge AI monetization on legacy sensor infrastructure for predictive maintenance and service offering creation without hardware replacementInvoice payment prediction at 87% accuracy reducing five-person AR teams to single-person operations while cutting DSO by two daysWhy golden dataset POCs fail at scale due to latency, inconsistency, and infrastructure readiness gapsSales approach for skeptical executives: lead with customer pain points, prove with similar completed work, commit to rapid production timelinesMiddle management resistance as the primary adoption barrier despite CEO enthusiasm and junior staff willingness to adopt AI tools

Feb 12, 2026 • 44min
The infrastructure mistake that kills AI pilots: Why sandboxes can't reach enterprise data centers
Lenovo cut parts planning from six hours to 90 seconds by treating infrastructure architecture as a first-class constraint, not an afterthought. Linda Yao, VP and GM of Hybrid Cloud and AI Solutions, has deployed AI across manufacturing, healthcare diagnostics, and enterprise operations. Her core thesis: most organizations fail at scale not because of use cases or data quality, but because they architect pilots in sandboxes that can't translate to production enterprise data centers.Through Lenovo's internal deployments and customer implementations, Yao has built a systematic approach to moving past experimentation. Her team developed what they call an AI library of battle tested use cases with proven deployment architectures, from computer vision systems that augment special education therapists to diagnostic tools preventing blindness in underserved regions. The methodology centers on a critical insight: ongoing monitoring and model management represents the capability gap causing implementations to plateau after initial deployment.Topics discussed:Five-stage methodology where ongoing monitoring of drift, model updates, and agent evolution separates successful deployments from stalled pilotsInfrastructure architecture coherence requirement between pilot and production environments to enable actual scalingEnterprise planning agents orchestrating across personal wellness, workload management, and digital employee experience using full device stack ownershipAI factory model for rapid diagnostic tool development and field distribution in resource constrained healthcare settingsHybrid deployment trend reversing decade long cloud first mentality due to data governance and compliance requirementsFour pillar readiness assessment covering security, data quality, people capability, and technology infrastructure before deploymentBuild leverage partner philosophy for full stack integration with pre tested component validation and reference architecturesLiquid cooling technology deployment addressing GPU energy consumption and data center sustainability constraints at scale

Jan 29, 2026 • 45min
How incident.io built AI agents that draft code fixes within 3 minutes of an alert
Lawrence Jones, product engineer at incident.io, describes how their AI incident response system evolved from basic log summaries to agents that analyze thousands of GitHub PRs and Slack messages to draft remediation pull requests within three minutes of an alert firing. The system doesn't pursue full automation because the real value lies elsewhere: eliminating the diagnostic work that consumes the first 30-60 minutes of incident response, and filtering out the false positives that wake engineers unnecessarily at 3am.The core architectural decision treats each organization's incident history as a unique immune system rather than fitting generic playbooks. By pre-processing and indexing how a specific company has resolved incidents across dimensions like affected teams, error patterns, and system dependencies, incident.io generates ephemeral runbooks that surface the 3-4 commands that actually worked last time this type of failure occurred. This approach emerged from recognizing that cross-customer meta-models fail because incident response is fundamentally organization-specific: one company's SEV-0 is an airline bankruptcy, another's is a stolen laptop.The engineering challenge centers on building trust with deeply skeptical SRE teams who view AI as non-deterministic chaos in their deterministic infrastructure. Lawrence's team addresses this through custom Go tooling that enables backtest-driven development: they rerun thousands of historical investigations with different model configurations and prompt changes, then use precision-focused scorecards to prove improvements objectively before deploying. This workflow revealed that traditional product engineers struggle with AI's slow evaluation cycles, while the team succeeded by hiring for methodical ownership over velocity.Topics discussed:Balancing precision versus recall in agent outputs to earn trust from SRE teams who are "hardcore AI holdouts"Pre-processing incident artifacts (PRs, Slack threads, transcripts) into queryable indexes that cross-reference team ownership, system dependencies, and historical resolution patternsModel selection strategy: GPT-4.1 for cost-effective daily operations, Claude Sonnet for superior code analysis and agentic planning loopsBacktest infrastructure that reruns thousands of past investigations with modified prompts to objectively validate changes through scorecard comparisonsBuilding ephemeral runbooks by extracting which historical commands and fixes worked for similar incidents, filtered by what the organization learned NOT to do in subsequent incidentsPrioritizing alert noise reduction over autonomous remediation because the false positive problem has clearer ROI and lower riskWhy AI engineering teams fail when staffed with traditional engineers optimized for fast feedback loops rather than tolerance for non-deterministic iterationBuilding entirely custom tooling in Go without vendor frameworks due to early ecosystem constraints and desire for native product integrationThe evaluation problem where only engineers who invested hundreds of hours building a system can predict how prompt changes cascade through multi-step agentic workflows

Jan 16, 2026 • 47min
Building AI agents for infrastructure where one mistake makes Wall Street Journal headlines
Alexander Page transitioned from sales engineer to engineering director by prototyping LLM applications after ChatGPT's launch, moving from initial prototype to customer GA in under four months. At Big Panda, he's building Biggy, an AIOps co-pilot where reliability isn't negotiable: a wrong automation execution at a major bank could make headlines.Big Panda's core platform correlates alerts from 10-50 monitoring tools per customer into unified incidents. Biggy operates at L2/L3 escalation: investigating root causes through live system queries, surfacing remediation options from Ansible playbooks, and managing incident workflows. The architecture challenge is building agents that traverse ServiceNow, Dynatrace, New Relic, and other APIs while maintaining human approval gates for any write operations in production environments.Page's team invested months building a dedicated multi-agent system (15-20 steps with nested agent teams) solely for knowledge graph operations. The insertion pipeline transforms unstructured data like Slack threads, call transcripts, and technical PDFs with images into graph representations, validating against existing state before committing changes. This architectural discipline makes retrieval straightforward and enables users to correct outdated context directly, updating graph relationships in real-time. Where vector search finds similar past incidents, the knowledge graph traces server dependencies to surface common root causes across connected infrastructure.Topics discussed:Moving LLM prototypes to production in months during GPT-3.5 era by focusing on customer design partnershipsEvaluating agentic systems by validating execution paths rather than response outputs in non-deterministic environmentsBuilding tool-specific agents for monitoring platforms lacking native MCP implementationsArchitecting multi-agent knowledge graph insertion systems that validate state before write operationsImplementing approval workflows for automation execution in high-consequence infrastructure environmentsDesigning RAG retrieval using fusion techniques, hypothetical document embeddings, and re-representation at indexingScaling design partnerships as extended product development without losing broader market applicabilitySeparating read-only investigation agents from write-capable automation agents based on failure consequence modeling

Dec 18, 2025 • 45min
ACC’s Dr. Ami Bhatt: AI Pilots Fail Without Implementation Planning
Dr. Ami Bhatt's team at the American College of Cardiology found that most FDA-approved cardiovascular AI tools sit unused within three years. The barrier isn't regulatory approval or technical accuracy. It's implementation infrastructure. Without deployment workflows, communication campaigns, and technical integration planning, even validated tools fail at scale.
Bhatt distinguishes "collaborative intelligence" from "augmented intelligence" because collaboration acknowledges that physicians must co-design algorithms, determine deployment contexts, and iterate on outputs that won't be 100% correct. Augmentation falsely suggests AI works flawlessly out of the box, setting unrealistic expectations that kill adoption when tools underperform in production.
Her risk stratification approach prioritizes low-risk patients with high population impact over complex diagnostics. Newly diagnosed hypertension patients (affecting 1 in 2 people, 60% undiagnosed) are clinically low-risk today but drive massive long-term costs if untreated. These populations deliver better ROI than edge cases but require moving from episodic hospital care to continuous monitoring infrastructure that most health systems lack.
Topics discussed:
Risk stratification methodology prioritizing low-risk, high-impact patient populations
Infrastructure gaps between FDA approval and scaled deployment
Real-world evidence approaches for AI validation in lower-risk categories
Synthetic data sets from cardiovascular registries for external company testing
Administrative workflow automation through voice-to-text and prior authorization tools
Apple Watch data integration protocols solving wearable ingestion problems
Three-part startup evaluation: domain expertise, technical iteration capacity, implementation planning
Real-time triage systems reordering diagnostic queues by urgency

Dec 4, 2025 • 40min
Usertesting's Michael Domanic: Hallucination Fears Mean You're Building Assistants, Not Thought Partners
UserTesting deployed 700+ custom GPTs across 800 employees, but Michael Domanic's core insight cuts against conventional wisdom: organizations fixated on hallucination risks are solving the wrong problem. That concern reveals they're building assistants for summarization when transformational value lives in using AI as strategic thought partner. This reframe shifts evaluation criteria entirely.
Michael connects today's moment to 2015's Facebook Messenger bot collapse, when Wit.ai integration promised conversational commerce that fell flat. The inversion matters: that cycle failed because NLP couldn't meet expectations shaped by decades of sci-fi. Today foundation models outpace organizational capacity to deploy responsibly, creating an obligation to guide employees through transformation rather than just chase efficiency.
His vendor evaluation cuts through conference floor noise. When teams pitch solutions, first question: can we build this with a custom GPT in 20 minutes? Most pitches are wrappers that don't justify $40K spend. For legitimate orchestration needs, security standards and low-code accessibility matter more than demos.
Topics discussed:
Using AI as thought partner for strategic problem-solving versus summarization and content generation tasks
Deploying custom GPTs at scale through OKR-building tools that demonstrated broad organizational application
Treating conscientious objectors as essential partners in responsible deployment rather than adoption blockers
Filtering vendor pitches by testing whether custom GPT builds deliver equivalent functionality first
Prioritizing previously impossible work over operational efficiency when setting transformation strategy
Building agent chains for customer churn signal monitoring while maintaining human decision authority
Implementing security-first evaluation for enterprise orchestration platforms with low-code requirements
Creating automated AI news digests using agent workflows and Notebook LM audio synthesis

Nov 20, 2025 • 46min
Christian Napier On Government AI Deployment: Why Productivity Tools Worked But Chatbots Didn't
Utah's tax chatbot pilot exposed the non-deterministic problem every enterprise faces: initial LLM accuracy hit 65-70% when judged by expert panels, with another 20-25% partially correct. After months of iteration, three of four vendors delivered strong enough results for Utah to make a vendor selection and begin production deployment. Christian Napier, Director of AI for Utah's Division of Technology Services, explains why the gap between proof of concept and production is where AI budgets and timelines collapse.His team deployed Gemini across state agencies with over 9,000 active users collectively saving nearly 12,000 hours per week. Meanwhile, agency-specific knowledge chatbots struggle with optional adoption, competing against decades of human expertise.The bigger constraint isn't technical. Vendor quotes for the same citizen-facing solution dropped from eight figures to five during negotiations as pricing models shifted. When procurement cycles run 18 months and foundation models deprecate quarterly, traditional budgeting breaks.Topics discussed:Expert panel evaluation methodology for testing LLM accuracy in regulated tax advice scenariosLow-code AI platforms reaching capability limits on complex use cases requiring pro-code solutionsAvoiding $5 million in potential annual licensing costs through Google Workspace AI integration timingTracking self-reported productivity gains of 12,000 hours weekly across 9,000 active usersAI Factory process requiring privacy impact assessments and security reviews before any pilotsVendor pricing dropping from eight-figure to five-figure quotes as commercial models evolvedForcing adoption through infrastructure replacement when legacy HR platform went read-onlySeparating automation opportunities from optional tools competing with existing workflowsDigital identity requirements for future agent-to-government transactions and authorizationRegulatory relief exploration for AI applications in licensed professions like mental health


