
Reward Models | Data Brew by Databricks | Episode 40
Mar 20, 2025

Brandon Cui, a Research Scientist at MosaicML and Databricks, specializes in AI model optimization and leads RLHF efforts. In this discussion, he unveils how synthetic data and RLHF can fine-tune models for better outcomes. He explores techniques like Proximal Policy Optimization and Direct Preference Optimization that enhance model responses. Brandon also emphasizes the critical role of reward models in boosting performance on coding, math, and reasoning tasks, while highlighting the necessity of human oversight in AI training.
AI Snips
Reward Model Purpose
- Reward models excel at scoring generations against criteria like helpfulness and safety.
- They judge whether a response fits the user's needs, enabling automated quality assessment (a minimal scoring sketch follows below).
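As a concrete illustration of the scoring role described above, here is a minimal sketch of querying a reward model for a scalar score. The checkpoint name is a placeholder and the setup (a Hugging Face sequence-classification head with a single output) is an assumption, not a detail from the episode.

```python
# Minimal sketch: score a (prompt, response) pair with a reward model.
# Assumes a sequence-classification checkpoint trained to output one scalar reward.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/your-reward-model"  # placeholder, not from the episode

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def score(prompt: str, response: str) -> float:
    """Return a scalar reward: higher means more helpful/safe by the model's learned criteria."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        reward = model(**inputs).logits[0, 0]
    return reward.item()

print(score("How do I reset my password?", "Click 'Forgot password' on the sign-in page."))
```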
Training Reward Models
- Train reward models with pairwise comparisons: present two responses to the same prompt and record which one is preferred.
- Gather ample preference data, then train the model to score the chosen response higher than the rejected one (see the training sketch below).
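To make the pairwise setup above concrete, here is a minimal sketch of a Bradley-Terry-style pairwise loss and training step. The `reward_model` interface and data shapes are assumptions for illustration, not details from the episode.

```python
# Minimal sketch: pairwise (Bradley-Terry) reward-model training.
# Assumes reward_model maps a batch of token ids to one scalar reward per sequence.
import torch
import torch.nn.functional as F

def pairwise_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Push the score of the preferred (chosen) response above the rejected one."""
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

def training_step(reward_model, optimizer, chosen_ids, rejected_ids) -> float:
    chosen_rewards = reward_model(chosen_ids)      # shape: (batch,)
    rejected_rewards = reward_model(rejected_ids)  # shape: (batch,)
    loss = pairwise_loss(chosen_rewards, rejected_rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```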
RLHF Beyond Safety
- RLHF isn't just for safety; it improves reasoning, math, and data understanding.
- Large language models benefit from reward models that strengthen these capabilities.
