
Going Deep on Deepfakes (feat. Hany Farid)
The FAIK Files
The Liar's Dividend and Erosion of Trust
Discussion of how synthetic 'slop' lets bad actors dismiss real evidence and erodes trust in institutions.
Welcome back to The FAIK Files!
In this week's episode:
- We sit down with deepfake expert Hany Farid to discuss the real-world harms of synthetic media.
- We explore the physics of deepfake detection and why real-time streams might be easier to defend.
- The dangers of using AI to "enhance" images and hallucinate hidden details.
- A look at solutions like C2PA, watermarking, and the pressing need for platform accountability.
Check out Hany's company, Get Real Security here: https://getrealsecurity.com
Learn more about C2PA at: https://contentcredentials.org
Want to leave us a voicemail? Here's the magic link to do just that: https://sayhi.chat/FAIK
You can also join our Discord server here: https://faik.to/discord
*** NOTES AND REFERENCES ***
What Keeps Hany Farid Up at Night?
- The rising harms of non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) generated by AI.
- Voice cloning being weaponized for individual fraud and real-time deepfakes used by state-sponsored actors.
- Why the specific tool (like Sora or face swap) matters less than the overall threat vector and resulting harm.
Deepfake Detection - APIs vs. Physics:
- Hany's work at UC Berkeley and his company Get Real Security.
- Why detecting real-time manipulated video is actually easier than identifying well-crafted, file-based deepfakes online.
- How physical camera imperfections (noise) differ from the artifacts introduced by AI upsampling and diffusion models.
The Danger of AI "Enhancement":
- Why using AI to "remove a ski mask" or enhance low-res footage is nothing like CSI: the model is just hallucinating statistically consistent pixels.
- AI lacks a notion of uncertainty, leading to dangerous misidentifications and real-world harm.
- The "Liar's Dividend": When the flood of AI slop makes people doubt the authenticity of real, unedited evidence.
Safeguards, C2PA, and Platform Responsibility:
- The role of watermarks and C2PA content credentials acting as "nutrition labels" for digital media.
- Clarifying that C2PA relies on signed credentials and a trust list, not a blockchain.
- The dire need for social media platforms to enforce semantic guardrails and take responsibility for the content they amplify.
- Find more of Hany's work by searching YouTube for his lectures on "Physics-Based Photo Forensics."
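For listeners curious what the "nutrition label" idea looks like in practice, here is a minimal illustrative sketch of a C2PA-style check: a credential binds a hash of the asset to a signer, and a verifier accepts it only if the asset is unmodified and the signer is on a trust list. This is not the real C2PA format (which uses X.509 certificates and COSE signatures inside JUMBF containers); an HMAC stands in for a true digital signature, and all device names and keys are made up for the example.

```python
# Toy sketch of a C2PA-style content-credential check.
# An HMAC stands in for a real digital signature; device IDs and keys
# are hypothetical. Real C2PA uses X.509 certs and COSE signatures.
import hashlib
import hmac

# Hypothetical trust list: device IDs mapped to accepted key material.
TRUST_LIST = {"camera-001": b"secret-key-for-camera-001"}

def sign_asset(device_id: str, asset: bytes) -> dict:
    """Produce a minimal 'credential' binding the asset to a device."""
    digest = hashlib.sha256(asset).hexdigest()
    tag = hmac.new(TRUST_LIST[device_id], digest.encode(), "sha256").hexdigest()
    return {"device": device_id, "digest": digest, "signature": tag}

def verify_asset(asset: bytes, cred: dict) -> bool:
    """Accept only if the asset is unmodified and the signer is trusted."""
    key = TRUST_LIST.get(cred["device"])
    if key is None:
        return False  # signer not on the trust list
    if hashlib.sha256(asset).hexdigest() != cred["digest"]:
        return False  # pixels changed after signing
    expected = hmac.new(key, cred["digest"].encode(), "sha256").hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

photo = b"raw image bytes"
cred = sign_asset("camera-001", photo)
print(verify_asset(photo, cred))           # True: intact and trusted
print(verify_asset(photo + b"x", cred))    # False: edited after signing
```

The key point the episode makes survives even in this toy version: the credential proves provenance and integrity, not truth — a trusted signature says who captured the pixels and that they haven't changed, nothing more.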
*** THE BOILERPLATE ***
About The FAIK Files:
The FAIK Files is an offshoot project from Perry Carpenter's most recent book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions.
- Get the Book: FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions (Amazon Associates link)
- Check out the website for more info: https://thisbookisfaik.com
Check out Perry & Mason's other show, the Digital Folklore Podcast:
- Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-folklore/id1657374458
- Spotify: https://open.spotify.com/show/2v1BelkrbSRSkHEP4cYffj?si=u4XTTY4pR4qEqh5zMNSVQA
Want to connect with us? Here's how:
Connect with Perry:
- Perry on LinkedIn: https://www.linkedin.com/in/perrycarpenter
- Perry on X: https://x.com/perrycarpenter
- Perry on Bluesky: https://bsky.app/profile/perrycarpenter.bsky.social
Connect with Mason:
- Mason on LinkedIn: https://www.linkedin.com/in/mason-amadeus-a853a7242/
- Mason on Bluesky: https://bsky.app/profile/wickedinterest.ing
Learn more about your ad choices. Visit megaphone.fm/adchoices


