
ACM ByteCast Ray Eitel-Porter - Episode 82
Feb 26, 2026
Ray Eitel‑Porter, AI safety and ethics expert and author of Governing the Machine, discusses his path from privacy concerns to responsible AI work. He highlights who should own AI governance in healthcare and why governance speeds safe adoption. He covers practical models, from crowdsourced checks to regulator sandboxes, and how to embed governance into AI workflows as technologies evolve.
Bias And Privacy Were The First AI Roadblocks
- Early AI adoption was slowed by two primary risks: biased historical datasets and data privacy breaches.
- Ray Eitel-Porter observed that biased training data can perpetuate harm and that privacy leaks can expose confidential information, preventing organizations from adopting AI with confidence.
Invisible Women Sparked A Shift To Responsible AI
- Reading Invisible Women convinced Ray that nonrepresentative data causes real-world harm, such as crash-test dummies modeled on male bodies.
- That concrete example showed how historical data and testing practices left women less protected and opened Ray's eyes to the dangers of biased datasets.
Build Governance Into Every Development Stage
- Embed an AI governance mindset throughout the development lifecycle rather than treating it as a final checklist gate.
- Ask early whether AI should be used at all and which data to select, then run bias and accuracy checks continuously so that final governance approval becomes routine.



