The McKinsey Podcast

Trust in the age of agents

Mar 5, 2026
Rich Isenberg, a McKinsey partner who advises leaders on scaling agentic AI and risk, joins to discuss trust and accountability for autonomous systems. He covers how agency shifts decision rights and why governance, non-bypassable guardrails, and automated monitoring matter. Short pilots, clear escalation paths, and kill switches are highlighted as ways to scale safely.
INSIGHT

Autonomy Levels Redefine Risk Taxonomy

  • Autonomy levels change risk: co-pilots raise accuracy concerns, semi-autonomous agents introduce financial risk, and fully autonomous agents require boundary enforcement.
  • Cross-agent data poisoning can ripple failures across operations, finance, and customer service.
ADVICE

Make Guardrails Unbypassable

  • Enforce guardrails so they cannot be bypassed; regulation such as the EU AI Act helps by classifying use cases and assigning accountability.
  • Isenberg: monitor bias controls and factual checks in real time, and prevent shadow agents from operating outside governance.
ADVICE

Redesign The Operating Model First

  • Redesign the operating model: define decision rights, accountability, escalation, and controls rather than treating agents as a tech upgrade.
  • Isenberg warns that without this redesign leaders are merely hoping the system behaves.