Kerry Sheehan, an award-winning AI policy expert and former strategic advisor to the Alan Turing Institute, delves into the role of ethics in AI development. She likens AI guardrails to bowling bumpers that ensure safe innovation while emphasizing the need for diverse teams to prevent bias. Kerry shares insights on the balance between rapid innovation and ethical governance, discussing challenges like explainability and regulatory compliance. Her work aims to create a shared language for responsible AI development, pursuing a future where technology serves everyone fairly.
INSIGHT: Diversity Prevents AI Bias
Diverse AI development teams prevent bias and ensure systems serve all populations fairly.
Bias acts like bad seasoning that can ruin an otherwise good AI "recipe."
ANECDOTE: GPT-3.5 Diverse Testing Example
OpenAI tested GPT-3.5 with 40 diverse people to identify ethical issues before release.
Testing across varied demographics helps catch biases and improves system fairness.
ANECDOTE: Government AI for Farmers
Kerry worked on an automated system to help farmers get subsidies efficiently.
They balanced automation with fairness and engagement to ensure trust and inclusion.
What happens when we prioritise innovation over ethics in AI development? For the 100th episode of the Digitally Curious Podcast, Kerry Sheehan, a machine learning specialist with a fascinating journey from journalism to AI policy, explores this critical question as she shares powerful insights on responsible AI implementation.
Kerry takes us on a compelling exploration of AI guardrails, comparing them to bowling alley bumpers that prevent technologies from causing harm. Her work with the British Standards Institution (BSI) has helped establish frameworks rooted in fairness, transparency, and human oversight, creating what she calls a "shared language for responsible development" without stifling innovation.
The conversation reveals profound insights about diversity in AI development teams. "If the teams building AI systems don't represent those that the end results will serve, it's not ethical," Kerry asserts. She compares bias to bad seasoning that ruins an otherwise excellent recipe, highlighting how diverse perspectives throughout the development lifecycle are essential for creating fair, beneficial systems.
Kerry's expertise shines as she discusses emerging ethical challenges in AI, from foundation models to synthetic data and agentic systems. She advocates for guardrails that function as supportive scaffolding rather than restrictive handcuffs: principle-driven frameworks with room for context, which allow developers to remain agile while staying within ethical boundaries.
What makes this episode particularly valuable are the actionable takeaways: audit your existing AI systems for fairness, develop clear governance frameworks you could confidently explain to others, add ethical reviews to project boards, and include people with diverse lived experiences in your design meetings. These practical steps can help organisations build AI systems that truly work for everyone, not just the privileged few.
This is an important conversation about making AI work for humanity rather than against it. Kerry's perspective will transform how you think about responsible technology implementation in your organisation.