The Data Exchange with Ben Lorica

Teaching AI How to Forget

Jan 15, 2026
In this engaging discussion, Ben Luria, CEO of Hirundo, dives into machine unlearning for AI. He explains how AI deployments often falter due to risks like bias and PII leakage, and argues that AI must be taught to forget undesirable behaviors, contrasting behavioral unlearning with data removal. The conversation also covers practical unlearning workflows, plans for multimodal support, and the potential to harden AI models against vulnerabilities like jailbreaks. Luria's insights illuminate a pathway to safer AI systems.
INSIGHT

Surgical Precision Prevents Collateral Damage

  • Naive removal attempts either leave behavioral traces of the removed data or degrade model utility unless they are surgically precise.
  • Effective unlearning must do both: erase remnants of the target data and preserve the model's performance on unrelated tasks.
ANECDOTE

PII In Fine-Tuned Support Models

  • Hirundo ran a proof of concept in which an enterprise had fine-tuned models on customer interactions that included emails.
  • That exposed a PII leakage risk and motivated using unlearning to remove specific identifiers without retraining from scratch.
INSIGHT

No Runtime Latency After Unlearning

  • Unlearning produces a new copy of model weights without adding inference latency or runtime compute.
  • This is more efficient at scale than runtime guardrails that increase costs and latency.
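The idea behind these snips can be sketched in a toy example. This is a minimal, generic gradient-ascent unlearning scheme, not Hirundo's actual method: a linear model is trained on a mix of clean ("retain") data and undesirable ("forget") data, then the forget set's influence is erased by ascending its loss while descending the retain loss. The result is a new copy of the weights, so inference afterwards costs nothing extra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data; the first 10 examples carry unwanted
# behavior (a label shift standing in for e.g. memorized PII).
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true
forget, retain = slice(0, 10), slice(10, None)
y[forget] += 5.0

def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * X.T @ (X @ w - y) / len(y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# 1) Ordinary training on everything, forget set included.
w = np.zeros(5)
for _ in range(500):
    w -= 0.05 * grad(w, X, y)

# 2) Unlearning: descend the retain-set loss (preserve utility)
#    while gently ascending the forget-set loss (erase the
#    unwanted behavior). No retraining from scratch is needed.
w_u = w.copy()
for _ in range(500):
    w_u -= 0.05 * grad(w_u, X[retain], y[retain])
    w_u += 0.005 * grad(w_u, X[forget], y[forget])

print("retain MSE after unlearning:", mse(w_u, X[retain], y[retain]))
print("forget MSE before/after:",
      mse(w, X[forget], y[forget]), mse(w_u, X[forget], y[forget]))
```

After the unlearning loop, error on the retain set stays low while error on the forget set rises well above the trained model's, illustrating the "erase remnants, preserve unrelated performance" goal from the first insight.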