
Illustrating Reinforcement Learning from Human Feedback (RLHF)
BlueDot Narrated
Fine-tuning the policy with RL
Perrin Walker frames the language model as the RL policy, describes its action space, and explains how PPO with a KL penalty is used to optimize the reward. A sketch of the KL-penalized reward appears below.
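
A minimal sketch of the KL-penalized objective described in this segment, assuming PyTorch tensors of per-token log-probabilities. The function name kl_penalized_reward, the weight beta, and the single summed KL estimate are illustrative assumptions, not the exact formulation used in the episode.

import torch

def kl_penalized_reward(reward_score, policy_logprobs, base_logprobs, beta=0.02):
    # reward_score:    scalar score from the reward model for the sampled response
    # policy_logprobs: log-probs the RL policy assigned to each sampled token
    # base_logprobs:   log-probs the frozen base model assigned to the same tokens
    # beta:            illustrative penalty weight (a tunable hyperparameter)
    #
    # Summing (log pi_RL - log pi_base) over tokens sampled from the policy gives a
    # Monte Carlo estimate of the KL divergence between the RL policy and the base model.
    kl_estimate = (policy_logprobs - base_logprobs).sum()
    # Penalize the policy for drifting too far from the base model's distribution.
    return reward_score - beta * kl_estimate

In practice the penalty keeps the fine-tuned model from producing text that scores well with the reward model but diverges sharply from the base model's fluent distribution.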


