
Can You Certify Good AI Use? This Organization Thinks So
The Media Copilot
Disclosure: When and How to Label AI
Murphy describes his disclosure principles, including adopting IAB guidance and focusing on whether end users would be misled by the use of AI.
As AI reshapes journalism and media, Richard Murphy of the Alliance for Audited Media explains why the industry needs actual standards.
AI is no longer experimental in media. It is operational.
From drafting articles to generating images to influencing distribution, artificial intelligence is now embedded across the entire content pipeline in many organizations. But as adoption accelerates, trust is breaking down just as fast.
In this episode of The Media Copilot, Pete Pachal talks with Richard Murphy, CEO of the Alliance for Audited Media, to unpack a growing industry response: ethical AI certification.
Murphy explains how publishers, advertisers, and audiences are all asking the same question in different ways: How do we know what is real, who created it, and whether we can trust it? The answer, at least in part, may lie in standards.
Drawing from AAM’s newly developed framework, Murphy walks through the pillars of responsible AI use, from transparency and disclosure to human oversight and data protection. The goal is not to slow innovation, but to create guardrails that keep media credible in an era where AI can generate anything.
Why This Matters
Media has always relied on trust as its currency. AI is testing that foundation.
When audiences cannot tell whether content is human-created, AI-assisted, or fully synthetic, credibility becomes fragile. At the same time, advertisers and partners are demanding proof that what they are funding or distributing meets ethical standards.
This is where certification enters the picture.
Ethical AI frameworks are quickly becoming more than best practice. They are emerging as a competitive advantage, a compliance strategy, and potentially a defense against future regulation.
The bigger shift is this: AI is not just changing how content is created. It is redefining what accountability looks like in media.
What We Cover
- What “ethical AI certification” actually means in practice
- The 8 pillars of responsible AI use in media organizations
- Why disclosure is moving from optional to essential
- The difference between AI-assisted and fully AI-generated content
- Where most trust failures are happening today
- Why self-regulation may be the industry’s best shot before government intervention
- How AI is impacting not just content creation, but distribution and business models
- The growing role of advertisers, partners, and audiences in demanding transparency
About the Guest
LinkedIn (Personal Profile): https://www.linkedin.com/in/rmurphy01
AAM Leadership Bio: https://auditedmedia.com/about/leadership
Alliance for Audited Media: https://auditedmedia.com
Digital Content Next (Articles): https://digitalcontentnext.org/blog/author/richmurphy/
About the Show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.
Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.
Produced by Pete Pachal. Executive Producer: Michele Musso.
Edited by the Musso Media Team. Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0. All rights reserved. © AnyWho Media 2026