283: Getting The Most Out of Data With Gradient Boosting

Super Data Science: ML & AI Podcast with Jon Krohn

The Different Implementations of Gradient Boosting

The one that first innovated was XGBoost. They basically did something like a Newton step instead of gradient descent. And then another thing, which I think XGBoost didn't have at the beginning but LightGBM had, was this: if you build a single tree, whenever you want to find a split, you basically have to sort the data.

We'll add the link in the show notes if anybody wants to check it out and get some more details. Okay, so that's gradient boosting in general. You mentioned a couple of implementations, like LightGBM, XGBoost, and the newer one, HistGradientBoosting. What's the difference?
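To make the "Newton step instead of gradient descent" remark concrete, here is a minimal sketch of the second-order update XGBoost popularized, using logistic loss. In classic gradient boosting a tree leaf fits the average negative gradient; in the Newton-style scheme the leaf weight also divides by the summed second derivative (plus a regularization term). The function names and the `lam` parameter below are illustrative, not from the episode.

```python
import numpy as np

def logistic_grad_hess(y, raw_pred):
    """First and second derivatives of log loss w.r.t. the raw margin prediction."""
    p = 1.0 / (1.0 + np.exp(-raw_pred))  # predicted probability
    g = p - y            # gradient of log loss
    h = p * (1.0 - p)    # hessian (second derivative)
    return g, h

def newton_leaf_value(y, raw_pred, lam=1.0):
    """Optimal leaf weight for the samples in one leaf: -sum(g) / (sum(h) + lambda)."""
    g, h = logistic_grad_hess(y, raw_pred)
    return -g.sum() / (h.sum() + lam)

# Example: four samples in a leaf, all starting from a raw prediction of 0 (p = 0.5).
y = np.array([1.0, 1.0, 1.0, 0.0])
raw = np.zeros(4)
print(newton_leaf_value(y, raw))  # positive weight, since most labels are 1
```

A gradient-only step would use just the mean negative gradient; dividing by the summed hessian scales the step by the local curvature of the loss, which is the Newton idea the guest is alluding to.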

Transcript (from 24:53)
