

The Road to Accountable AI
Kevin Werbach
Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.
Episodes
Mentioned books

Apr 2, 2026 • 33min
Richa Kaul, Complyance: Asking the Right Questions
Richa Kaul breaks down the AI risk landscape for enterprises, and argues that the key to managing these risks is resisting the urge to sensationalize. Kaul offers a candid assessment of where enterprise AI governance committees are falling short, noting that many lack the technical fluency to ask vendors the right questions, such as where customer data goes, whether it trains other clients' models, and what specific steps reduce hallucination. She suggests that market-driven security standards like SOC-2 and ISO 27001 often matter more in practice than government regulation, creating a "beautiful ecosystem" where risk management runs ahead of the law. Looking forward, she addresses the growing challenge of agentic AI systems that make decisions autonomously, offering a deceptively simple prescription: Map every action an agent can take, know where your highest risk sits, identify the critical decision points, and demand human sign-off at each one. Richa Kaul is the founder and CEO of Complyance, an AI-native enterprise governance, risk, and compliance (GRC) platform. Before founding Complyance, she was Chief Strategy Officer at ContractPodAi, a legal technology company, and previously served as Managing Director at the Virginia Economic Development Partnership and as a management consultant at McKinsey. Transcript Complyance Raises $20M to Help Companies Manage Risk and Compliance (TechCrunch, February 11, 2026)

Mar 26, 2026 • 34min
Michael Horowitz, UPenn: Governing AI That's Designed to Kill
How AI is, could be, and shouldn't be used in military and other national security contexts is a topic of growing importance. Recent conflicts on the battlefield, and between the U.S. military and a major AI lab, are forcing conversations about legal, ethical, and appropriate business limitations for increasingly powerful AI tools. Michael Horowitz, a Political Science professor and Director of Perry World House at the University of Pennsylvania, is one of the world's leading experts on military AI and autonomous weapons. In this episode, drawing on his two stints in the U.S. Department of Defense, Horowitz walks through the major buckets of military AI use. He explains why militaries are, in some ways, more incentivized than any other institution to get AI governance right, but genuine tensions among speed, effectiveness, and meaningful human control can make responsible military AI difficult in practice. We cover Anthropic's recent dispute with the Pentagon as a case study in the fragile and increasingly consequential relationship between Silicon Valley and the defense establishment. Michael C. Horowitz is the Richard Perry Professor of Political Science and Director of Perry World House at the University of Pennsylvania, and a Senior Fellow for Technology and Innovation at the Council on Foreign Relations. From 2022 to 2024, he served as U.S. Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities, where he was the principal author of the U.S. Political Declaration on Responsible Military Use of AI and Autonomy. He is the author of The Diffusion of Military Power: Causes and Consequences for International Politics and co-author of Why Leaders Fight. Transcript Battles of Precise Mass: Technology Is Remaking War — and America Must Adapt (Foreign Affairs, 2024) The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons (Daedalus, 2016) Rules of Engagement (Penn Gazette, 2025)

Mar 19, 2026 • 33min
Tanvi Singh, Ekta AI: The Case for Sovereign AI
Tanvi Singh draws on over two decades of building and governing AI systems inside global banks to make a provocative case: you cannot be accountable for decisions you do not control. Enterprises are consuming intelligence through models they don't own, can't explain, and didn't train. Singh reframes sovereignty beyond data center locations and infrastructure, to control across the entire stack, so that an organization's AI reflects its own values, laws, and culture. While frontier LLMs will continue to dominate the consumer and retail market, she argues that domain-specific models will be important for enterprise and regulated use cases, offering better accuracy at dramatically lower cost. The conversation also touches on Singh's engagement with the Vatican's Pontifical Academy of Sciences around AI ethics, which has worked on benchmarks that reflect institutional values rather than defaulting to the cultural norms baked into large internet-trained models. Tanvi Singh is the Co-Founder and CEO of Ekta Inc., a sovereign AI platform company building domain-specific foundation models for governments and regulated industries. She previously served as Group Head of AI, Data & Analytics at UBS and held senior technology leadership roles at Credit Suisse, GE, and Monsanto. She is the founder and managing partner of Nirmata-ai Ventures, a Zurich-based deep-tech venture fund, and serves as a board member of the Global Blockchain Business Council and GirlsCanCode. Transcript Sovereign AI: Why States and Institutions Have to Take Back Their Digital Intelligence (HSToday, co-authored with Thomas Cellucci) Ekta AI

Mar 12, 2026 • 33min
Ray Eitel-Porter, Co-Author of Governing the Machine: The Confidence to Use AI
Ray Eitel-Porter, former Global Lead for Responsible AI at Accenture and co-author of the new book, Governing the Machine, discusses how enterprises can move from abstract AI principles to practical governance. He emphasizes that organizations can only realize AI's benefits if responsibility is embedded into everyday business processes rather than treated as a standalone compliance exercise. Drawing on his experience leading global data and AI programs, Eitel-Porter explains how the release of ChatGPT transformed enterprise attitudes toward AI, accelerating adoption while exposing risks such as hallucinations, reliability failures, and reputational harm. Effective governance has evolved from static principles to operational controls, including workflow checkpoints, red teaming, and technical guardrails, particularly for generative AI systems with inherently probabilistic outputs. On risk, he stresses that not all AI use cases require the same level of scrutiny; governance should scale with potential impact and harm, focusing on what an AI system is intended to do so that non-technical teams can surface high-risk use cases without incentives to downplay risk. On regulation, Eitel-Porter notes that despite uncertainty around the EU AI Act, many multinational companies are treating it as a global baseline, similar to GDPR, while contrasting this with more deregulatory signals from the United States and questioning the global influence of the UK's middle-ground approach. He also shares insights from Governing the Machine, co-authored with Miriam Bogle and Paul Donkhan, emphasizing that AI governance is not a barrier to innovation but the foundation that allows organizations to deploy AI at scale with confidence and control. Ray Eitel-Porter is a Senior Advisor at Accenture and the former Global Lead for Responsible AI, where he designed and scaled AI governance programs for multinational organizations. 
He previously led Accenture's data and AI practice in the UK and has over a decade of experience advising companies on responsible AI, data governance, and emerging technology risk. Eitel-Porter is the co-author of Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential (Bloomsbury, 2025) and has led multi-year programs across public and private sectors, including global banks, retailers, and health brands. Transcript Governing the Machine (Bloomsbury 2025) Lessons from the Frontline – Designing and Implementing AI Governance (AI Journal)

Dec 18, 2025 • 38min
Alexandru Voica: Responsible AI Video
Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence. Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning. Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university. 
Transcript Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer) Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia) Computerspeak Newsletter

Dec 11, 2025 • 34min
Blake Hall: Safeguarding Identity in the AI Era
In this episode, Blake Hall, CEO of ID.me, discusses the massive escalation in online fraud driven by generative AI, noting that attacks have evolved from "Nigerian prince" scams to sophisticated, scalable social engineering campaigns that threaten even the most digitally savvy users. He explains that traditional knowledge-based verification methods are now obsolete due to data breaches, shifting the security battleground to biometric and possession-based verification. Hall details how his company uses advanced techniques—like analyzing light refraction on skin versus screens—to detect deepfakes, while emphasizing a "best of breed" approach that relies on government-tested vendors. Beyond the threats, Hall outlines a positive vision for a digital wallet that functions as a user-controlled "digital twin," allowing individuals to share only necessary data (tokenized identity) rather than overexposing personal information. He argues that government agencies must play a stronger role in validating core identity attributes to stop synthetic fraud and suggests that future AI "agents" will rely on cryptographically signed credentials to act on our behalf securely. Ultimately, he advocates for a model where companies "sell trust, not data," empowering users to control their own digital identity across finance, healthcare, and government services. Blake Hall is the Co-Founder and CEO of ID.me, a digital identity network with over 150 million members that simplifies how individuals prove and share their identity online. A former U.S. Army Ranger, Hall led a reconnaissance platoon in Iraq and was awarded two Bronze Stars, including one for valor, before earning his MBA from Harvard Business School. He has been recognized as CEO of the Year by One World Identity and an Entrepreneur of the Year by Ernst & Young for his work in pioneering secure, user-centric digital identity solutions. Transcript He Once Hunted Terrorists in Iraq.
Now He Runs a $2 Billion Identity Verification Company (Inc., November 11, 2025) "No Identity Left Behind": How Identity Verification Can Improve Digital Equity (ID.me)

Dec 4, 2025 • 32min
Mitch Kapor: AI Gap-Closing
Legendary entrepreneur and investor Mitch Kapor draws on his decades of experience to argue that while AI represents a massive wave of disruptive innovation, it also represents an opportunity to avoid mistakes made with social media and the early internet. In this episode, he contends that technologists tend toward over-optimism about technology solving human problems while underestimating downsides. Self-regulation by large AI companies like OpenAI and Anthropic is likely to fail, he suggests, because incentives to aggregate power and wealth are too strong, requiring external pressure and oversight. Kapor explains that his responsible investing approach at his venture capital firm, Kapor Capital, focuses on gap-closing rather than diversity for its own sake, funding startups that address structural inequalities in access, opportunity, or outcomes, regardless of founder demographics. He discusses the Humanity AI initiative and argues that philanthropy needs to develop AI literacy and technical capacity, with some foundations hiring chief technology officers to effectively engage with these issues. He believes targeted interventions can create meaningful change without matching the massive investments of the major AI labs. Kapor expresses hope that a younger generation of leaders in tech and philanthropy can step up to make positive differences, emphasizing that his generation should empower them rather than occupying seats at the table. Mitch Kapor is a pioneering technology entrepreneur, investor, and philanthropist who founded Lotus Development Corporation and created Lotus 1-2-3, the breakthrough spreadsheet software that helped establish the PC software industry in the 1980s. He co-founded the Electronic Frontier Foundation to advocate for digital rights and civil liberties, and later established Kapor Capital with his wife Freada Kapor Klein to invest in startups that close gaps of access, opportunity, and outcome for underrepresented communities. 
Kapor recently completed a master's degree at the MIT Sloan School focused on gap-closing investing, returning to finish what he started 45 years earlier when he left MIT to pursue his career in Silicon Valley. He serves on the steering committee of Humanity AI, a $500 million initiative to ensure AI benefits society broadly.

Nov 20, 2025 • 35min
Brad Carson: Sharing AI's Bounty
Former Congressman and Pentagon official Brad Carson discusses his organization, Americans for Responsible Innovation (ARI), which seeks to bridge the gap between immediate AI harms and catastrophic safety risks, while bringing deep Capitol Hill expertise to the AI conversation. He argues that unlike previous innovations such as electricity or the automobile, AI has been deeply unpopular with the public from the start, creating a rare bipartisan alignment among those skeptical of its power and impacts. This creates openings for productive discussions about AI policy. Drawing on his military experience, Carson suggests that while AI will shorten the kill chain, it won't fundamentally change the human nature of warfare, and he warns against the US military's tendency to seek technical solutions to human problems. The conversation covers current policy debates, highlighting the necessity of regulating the design of models rather than just their deployment, and the importance of export controls to maintain the West's advantage in compute. Ultimately, Carson emphasizes that for AI to succeed politically, the "bounty" of this technology must be shared broadly to avoid tearing apart the social fabric. Brad Carson is the founder and president of Americans for Responsible Innovation (ARI), an organization dedicated to lobbying for policy that ensures artificial intelligence benefits the public interest. A former Rhodes Scholar, Carson has had a diverse career in public service, having served as a U.S. Congressman from Oklahoma, the Undersecretary of the Army, and the acting Undersecretary of Defense for Personnel and Readiness. He also served as a university president and deployed to Iraq in 2008. Transcript Former TU President Brad Carson Pushes for Strong AI Guardrails

Nov 13, 2025 • 36min
Oliver Patel: Sharing Frameworks for AI Governance
Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights based on his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization—how to govern AI when everyone in the workforce can use and build it—as the biggest hurdle, and offers thoughts about how enterprises can respond. Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026. Transcript Enterprise AI Governance Substack Top 10 Challenges for AI Governance Leaders in 2025 (Part 1) Fundamentals of AI Governance book page

Nov 6, 2025 • 34min
Ravit Dotan: Rethinking AI Ethics
Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity, but organizational roadblocks. While companies often understand what they should do, the real challenge is organizational dynamics that prevent execution—AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight: responsible AI can't be a separate compliance exercise but must be embedded organically into how people work. Ravit discusses a recent shift in her orientation from focusing solely on governance frameworks to teaching people how to use AI thoughtfully. She critiques "take-out mode" where users passively order finished outputs, which undermines skills and critical review. The solution isn't just better governance, but teaching workers how to incorporate responsible AI practices into their actual workflows. Dr. Ravit Dotan is the founder and CEO of TechBetter, an AI ethics consulting firm, and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh. She holds a Ph.D. in Philosophy from UC Berkeley and has been named one of the "100 Brilliant Women in AI Ethics" (2023), and was a finalist for "Responsible AI Leader of the Year" (2025). Since 2021, she has consulted with tech companies, investors, and local governments on responsible AI. Her recent work emphasizes teaching people to use AI thoughtfully while maintaining their agency and skills. Her work has been featured in The New York Times, CNBC, Financial Times, and TechCrunch. Transcript My New Path in AI Ethics (October 2025) The Values Encoded in Machine Learning Research (FAccT 2022 Distinguished Paper Award) - Responsible AI Maturity Framework


