Into AI Safety

Getting Agentic w/ Alistair Lowe-Norris

Oct 20, 2025
Alistair Lowe-Norris, Chief Responsible AI Officer at Iridius, dives into the practical side of building safe AI systems. He addresses the crucial need for compliance standards and the potential of procurement practices to ensure responsible AI adoption. Alistair highlights gaps between company promises and actual safety measures, discussing models like robot avatars and the risks associated with AI expansion. He also emphasizes the importance of transparency and continuous oversight to maintain safety in AI practices.
INSIGHT

Use AI For Triage, Humans For Hard Cases

  • Tier-one support tasks are ripe for automation, while complex issues still need human experts.
  • Alistair recommends AI triage for FAQs and reserving humans for high-complexity problems.
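The triage pattern described above can be sketched as a simple router that answers routine FAQ-style tickets automatically and escalates everything else. This is a hypothetical illustration, not an Iridius system: the keyword list, length cutoff, and `Ticket` type are all assumptions standing in for a real classifier.

```python
# Hypothetical sketch of AI-triage routing: routine FAQ-like tickets go
# to an automated responder; anything without a clear FAQ signal (or
# long, multi-issue messages) is escalated to a human expert.
from dataclasses import dataclass

# Illustrative FAQ vocabulary; a real system would use a trained classifier.
FAQ_KEYWORDS = {"password", "reset", "invoice", "refund", "login"}

@dataclass
class Ticket:
    text: str

def triage(ticket: Ticket) -> str:
    """Return 'ai' for routine FAQ-like tickets, 'human' otherwise."""
    words = set(ticket.text.lower().replace("?", " ").split())
    overlap = len(words & FAQ_KEYWORDS)
    # Escalate when there is no FAQ signal, or the ticket is long and
    # likely describes a complex, multi-part problem.
    if overlap == 0 or len(words) > 50:
        return "human"
    return "ai"
```

The key design choice is that the router only needs to be confident about what is *routine*; every ambiguous case defaults to a human, which keeps the automation failure mode conservative.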
ADVICE

Prefer RAG And Tailored Explanations

  • Use retrieval-augmented approaches or larger context windows rather than one-size-fits-all LLM responses for customer answers.
  • Match explanations to the user’s language and comprehension level to improve outcomes.
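A minimal sketch of the retrieval step in a retrieval-augmented flow, assuming a tiny in-memory knowledge base and bag-of-words cosine similarity. The passages and helper names are illustrative; in practice the retriever would use embeddings and the retrieved passage would be fed to an LLM as grounding context.

```python
# Toy retrieval for a RAG-style customer-answering flow: pick the
# knowledge-base passage most similar to the question, which would then
# ground the generated answer. Passages here are made-up examples.
import math
from collections import Counter

KNOWLEDGE_BASE = [
    "To reset your password, open Settings and choose Security.",
    "Refunds are issued within five business days of approval.",
    "Invoices can be downloaded from the Billing page each month.",
]

def _vec(text: str) -> Counter:
    # Crude tokenization: lowercase and strip basic punctuation.
    cleaned = text.lower().replace(",", " ").replace(".", " ").replace("?", " ")
    return Counter(cleaned.split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str) -> str:
    """Return the knowledge-base passage most similar to the question."""
    q = _vec(question)
    return max(KNOWLEDGE_BASE, key=lambda p: _cosine(q, _vec(p)))
```

Grounding the answer in a retrieved passage is also what makes it practical to tailor the final explanation: the same passage can be rephrased by the model for the user's language and comprehension level.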
INSIGHT

Language Equity Needs Concerted Investment

  • Foundation models currently under-serve non-Western languages and dialects, creating equity gaps.
  • Alistair calls for benchmarks and investment to force parity across languages and diasporas.