
The Data Exchange with Ben Lorica: Teaching AI How to Forget
Jan 15, 2026

In this engaging discussion, Ben Luria, CEO of Hirundo, dives into the critical concept of machine unlearning for AI. He explains how AI deployments often falter due to risks like bias and PII leakage, and emphasizes the necessity of teaching AI to forget undesirable behaviors, contrasting behavioral unlearning with data removal. The conversation also explores practical unlearning workflows and plans for multimodal support, and highlights the potential to safeguard AI models from vulnerabilities like jailbreaks. Luria's insights illuminate the pathway to safer AI systems.
Surgical Precision Prevents Collateral Damage
- Naive removal attempts either leave recoverable traces of the target data or degrade the model's overall utility.
- Effective unlearning must erase remnants and preserve unrelated model performance.
PII In Fine-Tuned Support Models
- Hirundo ran a proof-of-concept where an enterprise fine-tuned models on customer interactions that included emails.
- That exposed PII risk and motivated using unlearning to remove specific identifiers without retraining from scratch.
No Runtime Latency After Unlearning
- Unlearning produces a new copy of model weights without adding inference latency or runtime compute.
- This is more efficient at scale than runtime guardrails that increase costs and latency.
