Data Skeptic

Disentanglement and Interpretability in Recommender Systems

Mar 10, 2026
Erwin Dervishai is a PhD student at the University of Copenhagen who studies representation learning and recommender systems. He explores what disentanglement means for learned embeddings, and discusses methods for interpreting embeddings, reproducibility challenges, trade-offs between interpretability and accuracy, using metadata and LLMs for denoising, and practical ideas for user control.
INSIGHT

Representation Learning Replaces Manual Features

  • Representation learning replaces manual feature engineering by letting models learn vectors that summarize inputs for downstream tasks.
  • Erwin Dervishai contrasts unsupervised, clustering-style learning with supervised learning, where labels guide how the internal representation forms.
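The idea above can be sketched minimally: rather than hand-picking features, project raw measurements onto directions learned from the data itself. This toy example (all data made up) uses PCA via an SVD as the simplest unsupervised representation learner:

```python
import numpy as np

# Hypothetical illustration: instead of hand-engineering features, learn a
# low-dimensional representation. PCA (via SVD) is the simplest unsupervised
# example of this idea.
rng = np.random.default_rng(0)

# Raw inputs: 100 items, each described by 10 correlated measurements
# (generated from 3 hidden factors, so a 3-dim summary suffices).
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 10))

# Learn a 3-dim representation: project onto the top 3 principal directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T          # learned 3-dim embedding of each item

print(Z.shape)             # (100, 3)
```

A downstream model would consume `Z` instead of hand-crafted features; a neural encoder plays the same role, with labels (in the supervised case) shaping which directions get learned.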
INSIGHT

Disentanglement Separates Independent Factors

  • Disentanglement aims to make learned representation dimensions correspond to independent factors like size or price.
  • Erwin gives a t-shirt example: changing size should not change price if factors are disentangled.
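The t-shirt example can be made concrete with a hypothetical, perfectly disentangled 2-dimensional embedding in which dimension 0 encodes size and dimension 1 encodes price (the mapping here is invented for illustration):

```python
import numpy as np

# Hypothetical sketch: a disentangled embedding assigns each independent
# factor (size, price) its own dimension.
def embed(size, price):
    return np.array([float(size), float(price)])

a = embed(size=2, price=15.0)   # medium t-shirt, $15
b = embed(size=3, price=15.0)   # large t-shirt, same price

# Changing size moves only dimension 0; the price dimension is untouched.
print(b - a)                    # [1. 0.]
```

In an entangled representation, the same size change would perturb several dimensions at once, including ones that also respond to price.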
ADVICE

Use Quantitative Disentanglement Metrics

  • Evaluate disentanglement quantitatively using established metrics such as disentanglement and completeness, instead of relying only on qualitative inspection.
  • The authors collected prior models and datasets and applied these metrics in a reproducibility study.
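A rough sketch of how disentanglement and completeness can be scored, in the spirit of DCI-style metrics: given an importance matrix `R[i, j]` measuring how much code dimension `i` predicts ground-truth factor `j`, disentanglement asks whether each code captures one factor, completeness whether each factor is captured by one code. The matrices below are made up; this is not the episode's exact protocol:

```python
import numpy as np

def entropy(p, base):
    # Normalized Shannon entropy of a nonnegative importance vector.
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(base)

def dci(R):
    codes, factors = R.shape
    # Disentanglement: 1 - row entropy, weighted by each code's importance.
    d_i = 1 - np.array([entropy(R[i], factors) for i in range(codes)])
    rho = R.sum(axis=1) / R.sum()
    D = float(d_i @ rho)
    # Completeness: 1 - column entropy, averaged over factors.
    C = float(np.mean([1 - entropy(R[:, j], codes) for j in range(factors)]))
    return D, C

R_good = np.array([[0.9, 0.0], [0.0, 0.9]])   # one code per factor
R_bad  = np.array([[0.5, 0.5], [0.5, 0.5]])   # fully entangled
print(dci(R_good), dci(R_bad))                # scores near (1, 1) vs (0, 0)
```

In a reproducibility study, `R` would come from feature importances of simple predictors trained to regress each factor from the learned codes, computed per model and dataset.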