DataFramed

#350 How to Make Hard Choices in AI with Atay Kozlovski, Researcher at the University of Zurich

Mar 9, 2026
Atay Kozlovski, a postdoctoral researcher in AI ethics at the University of Zurich, studies normative ethics, meaningful human control, and deepfakes. He discusses automation and algorithmic bias, when people fail to override AI, and the risks of high‑stakes systems in healthcare, welfare, and immigration. He also tackles deepfakes, consent for digital recreations, and why cautious, risk‑averse design matters.
ADVICE

Create Clear Protocols For Obvious AI Errors

  • Build protocols and training so workers know how to handle clear AI errors, rather than being stumped and creating queues or delays.
  • Use explicit override and verification processes at touchpoints like passport control so staff can check or correct AI outputs.
ANECDOTE

Lavender Kill List Case Study

  • Atay Kozlovski described Lavender, an IDF recommendation system that scored 2.3 million Gazans to produce kill lists and accelerated target validation from roughly 100 per week to roughly 1,000 per day.
  • Internal testing showed a roughly 10% false-positive rate, yet analysts had about 20 seconds to approve each target, causing egregious human rights harms.
ANECDOTE

Doctors Bypassing Discharge Safeguards

  • In a hospital pilot of an LLM discharge-letter tool, doctors used the AI summaries to catch up after absences, bypassing built-in guardrails and supervisor checks.
  • This emergent use undermined the intended safety constraints and illustrates how unpredictable 'in-the-wild' uses can be.