The Bunker – News without the nonsense

Law & Order: Artificial Intelligence Unit – How do we police A.I.?

Mar 12, 2026
Dr Federica Fedorczyk is a Research Fellow and AI ethics specialist studying AI’s impact on criminal justice. She explores a wide range of AI harms, from deepfakes and election interference to surveillance and cyberattacks. The discussion covers mapping foreseeable misuse, platform and product liability, regulatory choices across jurisdictions, and how non-regulatory tools such as courts and social pressure shape outcomes.
INSIGHT

AI Enables Many Distinct Harms Beyond Deepfakes

  • AI enables a wide range of illegal harms beyond deepfakes, from election interference to automated cyberattacks.
  • Federica Fedorczyk lists examples: mass surveillance, militarised automation, emotion-based discrimination, and behavioural manipulation of minors.
INSIGHT

Non‑Consensual Sexual Deepfakes Are Mass Scale

  • Non-consensual sexual deepfakes are widespread and primarily victimise women and children.
  • Federica notes that nearly 3 million sexual deepfakes were created over 10 days on a major public platform, showing this is mainstream behaviour, not a fringe activity.
ADVICE

Anticipate Misuse And Build Guardrails From Design

  • Map and anticipate both short‑term and long‑term misuse when designing AI systems.
  • Federica urges researchers to predict risks (deepfakes have existed since 2017) and build guardrails in from the design stage onward.