
MLOps.community: LinkedIn Recommender System - Predictive ML vs LLMs
Aug 12, 2025

Arpita Vats, a leading AI researcher specializing in Natural Language Processing and Recommender Systems, dives into the transformative role of LLMs in enhancing recommendation systems. She discusses how these models outpace traditional methods by interpreting user behavior more naturally. The conversation highlights benefits like reduced manual effort alongside challenges such as latency and cost. They explore the evolving landscape of personalized recommendations, including insights into travel recommendations and the nuances of algorithm visibility in social networking.
Mitigate LLM Latency With Distillation Or Offline Use
- Use lightweight LLMs or distill large models into student models to avoid inference latency in feeds.
- Alternatively run LLMs offline to generate features and use fast traditional models for online ranking.
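The offline pattern above can be sketched as follows. This is a minimal illustration, not LinkedIn's actual pipeline: `offline_llm_embed` is a hypothetical stand-in for a batch LLM call, and the online ranker is reduced to a dot product so the example runs on its own.

```python
# Sketch: run the LLM offline to precompute item features, then serve with a
# fast traditional model online. `offline_llm_embed` is a hypothetical
# stand-in for a real batch LLM/embedding call.

def offline_llm_embed(text: str) -> list[float]:
    """Hypothetical offline LLM call mapping text to a small feature vector."""
    # Stand-in: hash-derived pseudo-embedding so the sketch runs without a model.
    return [((hash(text) >> (8 * i)) & 0xFF) / 255.0 for i in range(4)]

# Offline batch job: precompute and cache features for every candidate item.
ITEMS = ["post about MLOps", "post about travel deals", "post about LLM latency"]
FEATURE_STORE = {item: offline_llm_embed(item) for item in ITEMS}

def rank_online(user_vector: list[float], candidates: list[str]) -> list[str]:
    """Fast online ranking: a dot product against cached features, no LLM call."""
    def score(item: str) -> float:
        return sum(u * f for u, f in zip(user_vector, FEATURE_STORE[item]))
    return sorted(candidates, key=score, reverse=True)

ranked = rank_online([0.9, 0.1, 0.5, 0.2], ITEMS)
print(ranked)
```

The key design choice is that all LLM latency is paid in the offline batch job; the online path only touches the precomputed feature store.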
Eval Criteria Stay The Same
- Evaluation metrics remain user-action driven: did the user like, comment on, or otherwise engage with the recommended items?
- LLMs change model internals but not the external success criteria for recommendations.
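Because the success criteria are action-based, the evaluation code is model-agnostic. A minimal sketch, assuming a simple impression log where each entry records the user's action (the field names are illustrative):

```python
# Sketch: success criteria stay the same whether the ranker is an LLM or a
# traditional model. Compute engagement rate over a log of impressions.

ENGAGEMENT_ACTIONS = {"like", "comment", "share"}

def engagement_rate(impressions: list[dict]) -> float:
    """Fraction of recommended items the user actually engaged with."""
    if not impressions:
        return 0.0
    engaged = sum(1 for imp in impressions if imp["action"] in ENGAGEMENT_ACTIONS)
    return engaged / len(impressions)

log = [
    {"item": "a", "action": "like"},
    {"item": "b", "action": "skip"},
    {"item": "c", "action": "comment"},
    {"item": "d", "action": "skip"},
]
print(engagement_rate(log))  # 0.5
```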
Prompts Replace Much Of Feature Design
- With LLMs the core engineering focus shifts from feature design to prompt engineering.
- The prompt quality determines whether the LLM uses the right latent signals for recommendations.
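The shift from feature design to prompt engineering can be made concrete with a small sketch. The template and function names here are illustrative, not from the episode: behavioral signals that would once have been hand-crafted features are instead serialized into the prompt.

```python
# Sketch: the engineering effort moves from numeric feature design to
# deciding what goes into the prompt. Template and names are illustrative.

PROMPT_TEMPLATE = """You are ranking feed items for a user.
Recent user activity:
{history}

Candidate items:
{candidates}

Return the candidate titles ordered from most to least relevant."""

def build_ranking_prompt(history: list[str], candidates: list[str]) -> str:
    """Serialize behavioral signals into the prompt; what is included (and
    how it is phrased) decides which latent signals the LLM can use."""
    return PROMPT_TEMPLATE.format(
        history="\n".join(f"- {h}" for h in history),
        candidates="\n".join(f"- {c}" for c in candidates),
    )

prompt = build_ranking_prompt(
    ["liked: Kubernetes cost post", "commented: LLM eval thread"],
    ["GPU pricing update", "Travel photos"],
)
print(prompt)
```

Iterating on this template (which actions to include, how much history, how to describe candidates) plays the role that feature selection and feature crosses played in traditional rankers.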
