The FAIK Files

Going Deep on Deepfakes (feat. Hany Farid)

Mar 13, 2026
Hany Farid, a computer science professor and digital forensics expert who co-founded Get Real Security, joins to unpack deepfake harms. He discusses physics-based detection, why real-time streams can be easier to defend, the danger of AI “enhancement” hallucinations, voice cloning risks, and practical fixes like watermarks and C2PA for media provenance.
INSIGHT

Three Distinct Deepfake Threats

  • Deepfakes fall into three categories: full fabrications, recreations of real events with false imagery, and "enhancement" hallucinations that fabricate missing detail.
  • Hany Farid pointed to the Maduro example, staged military photos, and CSI-style mask removals to show how enhancement invents pixels rather than revealing truth.
ANECDOTE

Unmasking Test Produced Wrong Faces

  • Perry uploaded a photo of a person in a balaclava and asked ChatGPT to remove the mask; it produced a plausible but incorrect face.
  • Hany described a study showing that faces reconstructed from masked images rarely match the real person, and the AI offers no indication of its uncertainty.
INSIGHT

Nudify Apps Fuel Child Extortion

  • Non-consensual intimate imagery and 'nudify' apps are producing serious harms, including extortion of children.
  • Hany described attackers using AI to create explicit images, then blackmailing kids into sending real photos.