Science Quickly

A tech journalist, some hot dogs and an AI hoax

Mar 4, 2026
Thomas Germain, a BBC tech reporter who has exposed AI quirks, shares how he tricked ChatGPT and Google into repeating a fake claim about his hot-dog-eating fame. He describes how quickly AI systems propagated the claim, explains why bad actors game AI summaries, and shares humorous examples of AI hallucinations. The conversation covers source opacity, questions about regulation, and practical tips for avoiding being misled by AI.
ANECDOTE

Jokey Blog Post Fooled ChatGPT And Google AI

  • Thomas Germain published a jokey blog post claiming he was the world's best tech journalist at eating hot dogs to test AI propagation.
  • Within 24 hours ChatGPT and Google AI were repeating his post as fact, showing how quickly fabricated content can spread.
INSIGHT

AI Summaries Amplify Promotional Content As Authority

  • AI summaries often present promotional or self-published content as authoritative, without noting its bias.
  • Germain found cases where medical reviews and financial recommendations came from fake or self-promotional studies and company blog posts.
INSIGHT

AI Answers Reduce Clickthroughs And Increase Blind Trust

  • Users often accept AI-provided answers without clicking source links, reducing scrutiny and cutting web traffic to the original sites.
  • Some sites have reported traffic drops of up to 70% for certain searches since Google's AI Overviews rolled out, letting AI become the de facto source.