Illustrating Reinforcement Learning from Human Feedback (RLHF)

BlueDot Narrated

Fine-tuning the policy with RL

Perrin Walker frames the LM as the policy and its outputs as the action space, and explains how PPO with a KL penalty is used to optimize the reward.
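
The objective the episode describes can be written as a short sketch: the reward model's scalar score for a completion, minus a KL penalty that keeps the PPO-trained policy close to the frozen base model. This is a minimal illustration assuming per-token log-probabilities are already computed; the function name, the `beta` value, and the toy numbers are assumptions for illustration, not code from the episode.

```python
import torch

def kl_penalized_reward(reward_score: float,
                        policy_logprobs: torch.Tensor,
                        ref_logprobs: torch.Tensor,
                        beta: float = 0.02) -> torch.Tensor:
    # Per-token KL estimate between the RL policy and the frozen base LM:
    # log pi_RL(y_t | x, y_<t) - log pi_base(y_t | x, y_<t)
    kl = policy_logprobs - ref_logprobs
    # Subtract the scaled KL from the reward model's scalar score, so the
    # policy is rewarded for high-scoring text but penalized for drifting
    # too far from the base model's distribution.
    return reward_score - beta * kl.sum()

# Toy usage with made-up log-probabilities for a 4-token completion.
policy_lp = torch.tensor([-1.2, -0.8, -2.0, -0.5])
ref_lp = torch.tensor([-1.5, -1.0, -1.8, -0.7])
print(kl_penalized_reward(reward_score=0.9,
                          policy_logprobs=policy_lp,
                          ref_logprobs=ref_lp))
```

The `beta` coefficient trades off reward maximization against staying near the base model: too small and the policy can collapse into reward-hacked gibberish, too large and it barely moves from the starting point.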
