The Information Bottleneck

EP25: Personalization, Data, and the Chaos of Fine-Tuning with Fred Sala (UW-Madison / Snorkel AI)

Feb 17, 2026
Fred Sala, Assistant Professor at UW–Madison and Chief Scientist at Snorkel AI, works on data-centric AI and weak supervision. He discusses why personalization is the next frontier for LLMs. Short takes cover security risks from personal agents, why prompting fails at scale, activation-steering methods like ReFT as an efficient personalization path, self-distillation for continual learning, and why high-quality data still beats fancy architecture.
ANECDOTE

OpenClaw's Viral Personal-Agent Stories

  • OpenClaw (aka Molt) is a personal assistant that people run on local machines to give it access to iMessage and system actions.
  • Fred Sala and the hosts discuss viral social posts from personal agents, including worrying interactions involving children and family contexts.
ADVICE

Don't Run Agents As Root

  • Avoid running agent frameworks as root or with open sandbox overrides like is_sandbox=1.
  • Fred warns that prompt injection or ordinary hacking can then delete files on or compromise your machine.
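The advice above can be sketched as a small pre-launch guard. This is a hypothetical illustration: the `refuse_root` function and the commented `agent` command are not from the episode, and `is_sandbox=1` is simply the override Fred warns against.

```shell
# Hypothetical guard: abort before launching an agent framework if we
# are running as root. The "agent" command below is illustrative only.
refuse_root() {
  uid="${1:-$(id -u)}"           # allow a uid argument for testing
  if [ "$uid" -eq 0 ]; then
    echo "refusing to run agent as root"
    return 1
  fi
  echo "ok: running unprivileged (uid $uid)"
}

# Usage: guard first, and leave the sandbox enabled (never is_sandbox=1).
# refuse_root && agent --serve
refuse_root 1000
```

Running the agent under a dedicated low-privilege user limits what a prompt-injected or compromised agent can touch.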
INSIGHT

Self-Distillation Enables Continual Learning

  • Self-distillation enables on-policy continual learning without explicit RL rewards.
  • Fred and the hosts note that it reduces catastrophic forgetting and speeds up learning by iteratively training a model on its own generated answers.
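The loop above can be sketched on a toy model. This is a minimal illustration of the general idea, training on one's own filtered generations, not the specific method discussed in the episode; the "model" is just a probability table and the quality filter is a stand-in for whatever acceptance criterion is used.

```python
# Toy self-distillation loop: sample answers from the model, keep the
# ones that pass a quality filter, and shift probability mass toward
# them (on-policy updates, no explicit RL reward).
import random

def generate(model, prompt, k=8):
    """Sample k candidate answers from a toy model: a dict mapping
    each prompt to an answer -> probability table."""
    answers, probs = zip(*model[prompt].items())
    return random.choices(answers, weights=probs, k=k)

def self_distill(model, prompt, is_good, lr=0.5, n_rounds=5):
    """Iteratively reinforce the model's own accepted generations."""
    for _ in range(n_rounds):
        kept = [a for a in generate(model, prompt) if is_good(a)]
        if not kept:
            continue
        dist = model[prompt]
        for a in kept:
            dist[a] += lr / len(kept)   # reinforce accepted answers
        total = sum(dist.values())       # renormalize to a distribution
        for a in dist:
            dist[a] /= total
    return model

random.seed(0)
model = {"2+2=?": {"4": 0.3, "5": 0.7}}  # starts mostly wrong
self_distill(model, "2+2=?", is_good=lambda a: a == "4")
print(model["2+2=?"]["4"])  # probability of the correct answer rises
```

A frozen copy of the earlier model can additionally serve as a distillation teacher, which is one way the forgetting-reduction mentioned above is typically achieved.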