Ethical Machines

Reid Blackman
May 14, 2026 • 46min

AI Governance is Lagging

The Reuters Foundation recently conducted a global survey and found that most companies are lagging in their attempts to govern AI. For me, the most surprising stat is that 85% of companies don’t have any training on AI risks for their employees. I think that’s just insane. Today my guests are Antonio Zappulla, CEO of the Reuters Foundation, and Katie Fowler, Director of Responsible Business. We talk about how they conducted their research, their results, and what incentives there are for businesses to do better.
Newsletter: https://www.trust.org/newsletter/
Advertising Inquiries: https://redcircle.com/brands
May 7, 2026 • 45min

Predictions are Commands

Carissa Véliz, Associate Professor at Oxford and author of Prophecy, explores how predictions in AI function as power plays. She contrasts forecasts about human behavior with weather forecasts, warns that large-scale AI creates monocultures, and argues that claims of inevitability can become commands. The conversation urges questioning predictions, protecting autonomy, and planning for uncertain futures.
May 3, 2026 • 1h

The Ethical Nightmare Challenge: Chapters 6-7 and Conclusion

They break down building rapid response ENC teams to spot and score AI nightmares. They explain a seven-step, repeatable method and the three kickoff questions every team should answer. They cover folding ENC into existing policies, tools, and compliance, plus how cross-functional webs grow organizational resilience.
May 1, 2026 • 49min

The Ethical Nightmare Challenge: Chapters 4-5

Chapter 4: The Standard Approach to Responsible AI Is Crumbling
The Standard Approach
The Madness in the Method
Turn That Smile Upside Down
Cats and Tigers, Oh My!
Chapter 5: Why I Like Nightmares and You Should, Too
The Power of Nightmares
What Good Nightmares Look Like
And Now the Moment You've Been Waiting For
Apr 30, 2026 • 1h 14min

The Ethical Nightmare Challenge: Chapters 2-3

Chapter Two: Things Get Complicated with Generative AI
So Now We’re Going to Lose My Grandmother, Again
The Creators’ Version of a Rough Draft
The Creators Align (Kind of)
BigBusinessAI
The Master Prompter
The Changing AI Risk Landscape
Chapter Three: Humans Had a Good Run, but Now I Bring You... AI Agents!
How to Build an AI Agent
AI Agent Ecosystems
Agentic Sources of Ethical Nightmares
The Classic “But Humans Make Errors, Too!” Objection
The Ground Exploded Beneath Our Feet
After the Earthquake
Interlude: Get a Grip, Man!
Apr 23, 2026 • 45min

The Ethical Nightmare Challenge

A witty introduction to a new book on why traditional Responsible AI guidance breaks down with agentic systems. A cat vs tiger analogy explains the shift from narrow to generative AI. Practical steps for organizations to identify and train against AI nightmares are proposed. Legal, privacy, hallucination, bias, and automation risks are highlighted without technical jargon.
Apr 16, 2026 • 48min

Creating Universal Standards for AI Risk

ISO 42001 sounds serious. It's got a serious (and boring) name, it's backed by 60+ countries, and some companies seek ISO 42001 certification. But is the standard any good? Does it actually prevent harms? Can we have generic standards? And how can the standards be flexible enough to account for the fast-paced change in the AI world? I’m a bit of a skeptic about all this, but my guest, Patrick Sullivan, VP of Strategy and Innovation at A-lign, is a true believer. And he makes a strong case. You decide if my skepticism is unwarranted.
Apr 9, 2026 • 47min

Existentialist Risk

Technologists are racing to create AGI, artificial general intelligence. They also say we must align the AGI’s moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that’s impossible. Once you create an AGI, they say, you also give it the intellectual capacity needed for freedom, including the freedom to reject your given values. Originally aired in season 2.
Apr 2, 2026 • 54min

Could AI Have Moral Worth?

My guest today, Josh Gellers, Dean at the University of North Florida, argues that AI may have moral worth. More specifically, he thinks that AI has been used to create new biological organisms that meet the criteria for moral worth. Does that mean that AI itself has moral worth? Should we think that if something is not natural it lacks moral worth? All this and more in today’s episode.
Mar 26, 2026 • 41min

Don’t Believe the Hype About AI Job Displacement

My guests today - Professor Kate Vredenburgh and VR specialist Lauren Wong - argue that there are at least two strong reasons for calming down: first, AI isn’t good enough to replace us at our jobs. Second, even if it were, it’s up to us to develop AI in a way that supports rather than replaces us. We also talk about whether AI adoption is struggling for the same reasons the metaverse was never successful: we’re failing to appreciate how to get people to justifiably buy into the technology.
