TechCrunch Daily Crunch

Meet the Facebook insider who has been building content moderation for the AI era.

Apr 4, 2026
A deep dive into the messy reality of content moderation, where the problems run deeper than technology. Tales of overburdened human reviewers, poor translations, and quick, inaccurate decisions. How AI chatbots have amplified safety risks and evaded filters. A new startup encoding policy into executable logic for real-time moderation and steering risky conversations toward safer paths.
INSIGHT

Human Review Limits Broke Moderation Accuracy

  • Content moderation problems at Facebook were deeper than technology and involved poor reviewer workflows and policy translation.
  • Human reviewers memorized a 40-page machine-translated policy and had ~30 seconds per flagged item, yielding just over 50% accuracy.
ANECDOTE

Founder Pivot From Apple to Building Moonbounce

  • Levinson left Apple for Facebook expecting technology would fix moderation, but found that its processes failed under adversarial misuse and AI-era speed.
  • That led him to found Moonbounce to convert policy into executable logic tied to enforcement.
INSIGHT

Policy As Code Enables Real-Time Enforcement

  • Policy as code turns static rules into executable, updatable logic that can act at runtime.
  • Moonbounce runs an LLM reading a customer's policies and evaluates content in under 300 milliseconds to slow, block, or queue items.
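The idea above can be sketched in miniature. This is not Moonbounce's actual system; it is a minimal, hypothetical illustration of "policy as code," where written rules become executable predicates mapped to enforcement actions (in production, the predicate would be an LLM judgment rather than a string match):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Action(Enum):
    ALLOW = "allow"
    SLOW = "slow"    # rate-limit or steer the conversation
    BLOCK = "block"  # stop the content outright
    QUEUE = "queue"  # escalate to human review

@dataclass
class Rule:
    name: str
    # Stand-in for an LLM reading the policy against the content
    predicate: Callable[[str], bool]
    action: Action

def evaluate(content: str, rules: List[Rule]) -> Action:
    """Return the first matching rule's action; default to ALLOW."""
    for rule in rules:
        if rule.predicate(content):
            return rule.action
    return Action.ALLOW

# Hypothetical policy, encoded as executable and updatable rules
rules = [
    Rule("self_harm", lambda t: "hurt myself" in t.lower(), Action.BLOCK),
    Rule("scam_link", lambda t: "free-crypto" in t.lower(), Action.QUEUE),
]

print(evaluate("Click free-crypto now!", rules).value)  # queue
```

Because the rules are data rather than a static document, updating policy means swapping in a new rule list at runtime, which is what makes sub-300-millisecond enforcement plausible.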