A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning

Gradient boosting is one of the most powerful techniques for building predictive models.
In this post you will discover the gradient boosting machine learning algorithm and get a gentle introduction into where it came from and how it works.
After reading this post, you will know:
The origin of boosting from learning theory and AdaBoost.
How gradient boosting works including the loss function, weak learners and the additive model.
How to improve performance over the base algorithm with various regularization schemes.

Let’s get started.

Photo by brando.n, some rights reserved.

The Origin of Boosting
The idea of boosting came out of the question of whether a weak learner can be modified to become better.
Michael Kearns articulated this as the “Hypothesis Boosting Problem”, stating the goal from a practical standpoint as:
… an efficient algorithm for converting relatively poor hypotheses into very good hypotheses

Thoughts on Hypothesis Boosting [PDF], 1988
A weak hypothesis or weak learner is defined as one whose performance is at least slightly better than random chance.
These ideas built upon Leslie Valiant’s work on distribution-free or Probably Approximately Correct (PAC) learning, a framework for investigating the complexity of machine learning problems.
Hypothesis boosting was the idea of filtering observations, leaving those observations that the weak learner can handle and focusing on developing new weak learners to handle the remaining difficult observations.
The idea is to use the weak learning method several times to get a succession of hypotheses, each one refocused on the examples that the previous ones found difficult and misclassified. … Note, however, it is not obvious at all how this can be done

Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World, page 152, 2013
AdaBoost the First Boosting Algorithm
The first realization of boosting that saw great success in application was Adaptive Boosting or AdaBoost for short.
Boosting refers to this general problem of producing a very accurate prediction rule by combining rough and moderately inaccurate rules-of-thumb.

A decision-theoretic generalization of on-line learning and an application to boosting [PDF], 1995
The weak learners in AdaBoost are decision trees with a single split, called decision stumps for their shortness.
AdaBoost works by weighting the observations, putting more weight on difficult to classify instances and less on those already handled well. New weak learners are added sequentially that focus their training on the more difficult patterns.
This means that samples that are difficult to classify receive increasingly larger weights until the algorithm identifies a model that correctly classifies these samples.

Applied Predictive Modeling, 2013
Predictions are made by majority vote of the weak learners’ predictions, weighted by their individual accuracy. The most successful form of the AdaBoost algorithm was for binary classification problems and was called AdaBoost.M1.
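
As a quick illustration (not from the original post), the sketch below fits AdaBoost with decision stumps as the weak learners using scikit-learn's AdaBoostClassifier; the synthetic dataset and parameter values are arbitrary examples.

# Illustrative sketch: AdaBoost with decision stumps (single-split trees) as weak learners.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# A synthetic binary classification problem (arbitrary example data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=7)

# A decision stump: a tree with a single split (max_depth=1).
stump = DecisionTreeClassifier(max_depth=1)
# Note: older scikit-learn versions take base_estimator instead of estimator.
model = AdaBoostClassifier(estimator=stump, n_estimators=50, random_state=7)

print("Mean cross-validation accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
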
You can learn more about the AdaBoost algorithm in the post:
Boosting and AdaBoost for Machine Learning.

Generalization of AdaBoost as Gradient Boosting
AdaBoost and related algorithms were first recast in a statistical framework by Breiman, who called them ARCing algorithms.
Arcing is an acronym for Adaptive Reweighting and Combining. Each step in an arcing algorithm consists of a weighted minimization followed by a recomputation of [the classifiers] and [weighted input].

Prediction Games and Arcing Algorithms [PDF], 1997
This framework was further developed by Friedman and called Gradient Boosting Machines, later called just gradient boosting or gradient tree boosting.
The statistical framework cast boosting as a numerical optimization problem where the objective is to minimize the loss of the model by adding weak learners using a gradient descent like procedure.
This class of algorithms was described as a stage-wise additive model. This is because one new weak learner is added at a time and existing weak learners in the model are frozen and left unchanged.
Note that this stagewise strategy is different from stepwise approaches that readjust previously entered terms when new ones are added.

Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999
The generalization allowed arbitrary differentiable loss functions to be used, expanding the technique beyond binary classification problems to support regression, multi-class classification and more.
How Gradient Boosting Works
Gradient boosting involves three elements:
A loss function to be optimized.
A weak learner to make predictions.
An additive model to add weak learners to minimize the loss function.

  1. Loss Function
    The loss function used depends on the type of problem being solved.
    It must be differentiable, but many standard loss functions are supported and you can define your own.
    For example, regression may use a squared error and classification may use logarithmic loss.
    A benefit of the gradient boosting framework is that a new boosting algorithm does not have to be derived for each loss function that may be used; instead, the framework is generic enough that any differentiable loss function can be applied.
  2. Weak Learner
    Decision trees are used as the weak learner in gradient boosting.
    Specifically, regression trees are used that output real values for splits and whose outputs can be added together, allowing the outputs of subsequent models to be added to “correct” the residuals in the predictions.
    Trees are constructed in a greedy manner, choosing the best split points based on purity scores like Gini or to minimize the loss.
    Initially, such as in the case of AdaBoost, very short decision trees were used that had only a single split, called a decision stump. Larger trees can be used, generally with 4-to-8 levels.
    It is common to constrain the weak learners in specific ways, such as a maximum number of layers, nodes, splits or leaf nodes.
    This is to ensure that the learners remain weak, but can still be constructed in a greedy manner.
  3. Additive Model
    Trees are added one at a time, and existing trees in the model are not changed.
    A gradient descent procedure is used to minimize the loss when adding trees.
    Traditionally, gradient descent is used to minimize a set of parameters, such as the coefficients in a regression equation or weights in a neural network. After calculating error or loss, the weights are updated to minimize that error.
    Instead of parameters, we have weak learner sub-models, or more specifically decision trees. After calculating the loss, to perform the gradient descent procedure we must add a tree to the model that reduces the loss (i.e. follows the gradient). We do this by parameterizing the tree, then modifying the parameters of the tree to move in the right direction by reducing the residual loss.
    Generally this approach is called functional gradient descent or gradient descent with functions.
    One way to produce a weighted combination of classifiers which optimizes [the cost] is by gradient descent in function space

Boosting Algorithms as Gradient Descent in Function Space [PDF], 1999

The output for the new tree is then added to the output of the existing sequence of trees in an effort to correct or improve the final output of the model.
A fixed number of trees are added or training stops once loss reaches an acceptable level or no longer improves on an external validation dataset.
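
To tie the three elements together, below is a minimal from-scratch sketch of gradient boosting for regression with a squared error loss, for which the negative gradient is simply the residual. It assumes NumPy and scikit-learn's DecisionTreeRegressor as the weak learner; the function and parameter names are illustrative and not taken from any of the papers quoted above.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Fit a gradient boosted ensemble for squared error loss (illustrative sketch)."""
    # Start from a constant model: the mean of the target.
    base_prediction = float(np.mean(y))
    prediction = np.full(len(y), base_prediction)
    trees = []
    for _ in range(n_trees):
        # Negative gradient of the squared error loss = the residuals.
        residuals = y - prediction
        # Fit a small regression tree (the weak learner) to the residuals.
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # Add the new tree's (shrunken) output; existing trees are left unchanged.
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return base_prediction, trees

def predict(X, base_prediction, trees, learning_rate=0.1):
    """Sum the base value and the shrunken outputs of all trees (use the same learning rate as in training)."""
    prediction = np.full(X.shape[0], base_prediction)
    for tree in trees:
        prediction += learning_rate * tree.predict(X)
    return prediction

# Usage sketch on synthetic data.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=1)
base, trees = gradient_boost(X, y)
print(predict(X[:3], base, trees))
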
Improvements to Basic Gradient Boosting
Gradient boosting is a greedy algorithm and can overfit a training dataset quickly.
It can benefit from regularization methods that penalize various parts of the algorithm and generally improve the performance of the algorithm by reducing overfitting.
In this section we will look at 4 enhancements to basic gradient boosting:
Tree Constraints
Shrinkage
Random sampling
Penalized Learning

  1. Tree Constraints
    It is important that the weak learners have skill but remain weak.
    There are a number of ways that the trees can be constrained.
    A good general heuristic is that the more constrained tree creation is, the more trees you will need in the model, and the reverse: the less constrained the individual trees, the fewer trees will be required.
    Below are some constraints that can be imposed on the construction of decision trees:
    Number of trees: generally, adding more trees to the model is very slow to cause overfitting. The advice is to keep adding trees until no further improvement is observed.
    Tree depth: deeper trees are more complex trees, and shorter trees are preferred. Generally, better results are seen with 4-to-8 levels.
    Number of nodes or number of leaves: like depth, this can constrain the size of the tree, but the tree is not constrained to a symmetrical structure if other constraints are used.
    Number of observations per split: imposes a minimum constraint on the amount of training data at a node before a split can be considered.
    Minimum improvement to loss: a constraint on the improvement of any split added to a tree.
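
As a rough sketch, these constraints map onto parameters of scikit-learn's GradientBoostingRegressor roughly as follows; the values shown are arbitrary examples, not recommendations.

# Rough mapping of the tree constraints to scikit-learn parameters (example values only).
from sklearn.ensemble import GradientBoostingRegressor

model = GradientBoostingRegressor(
    n_estimators=500,           # number of trees
    max_depth=4,                # tree depth
    max_leaf_nodes=None,        # number of leaves (if set, max_depth is ignored)
    min_samples_split=10,       # minimum observations at a node before a split is considered
    min_impurity_decrease=0.0,  # minimum improvement to loss required for a split
)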

  2. Weighted Updates
    The predictions of each tree are added together sequentially.
    The contribution of each tree to this sum can be weighted to slow down the learning by the algorithm. This weighting is called a shrinkage or a learning rate.
    Each update is simply scaled by the value of the “l(xiāng)earning rate” parameter v

Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999
The effect is that learning is slowed down, in turn requiring more trees to be added to the model, which in turn takes longer to train, providing a configuration trade-off between the number of trees and the learning rate.
Decreasing the value of v [the learning rate] increases the best value for M [the number of trees].

Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999
It is common to have small values in the range of 0.1 to 0.3, as well as values less than 0.1.
Similar to a learning rate in stochastic optimization, shrinkage reduces the influence of each individual tree and leaves space for future trees to improve the model.

Stochastic Gradient Boosting [PDF], 1999
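
As a sketch of this trade-off (using scikit-learn parameter names; the values are examples only), a smaller learning rate is typically paired with a larger number of trees:

# Shrinkage trade-off sketch: smaller learning rate, more trees (example values only).
from sklearn.ensemble import GradientBoostingRegressor

faster = GradientBoostingRegressor(learning_rate=0.3, n_estimators=100)    # learns quickly, fewer trees
slower = GradientBoostingRegressor(learning_rate=0.01, n_estimators=3000)  # learns slowly, many more trees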

  3. Stochastic Gradient Boosting
    A big insight behind bagging ensembles and random forests was that trees can be created greedily from subsamples of the training dataset.
    This same benefit can be used to reduce the correlation between the trees in the sequence in gradient boosting models.
    This variation of boosting is called stochastic gradient boosting.
    at each iteration a subsample of the training data is drawn at random (without replacement) from the full training dataset. The randomly selected subsample is then used, instead of the full sample, to fit the base learner.

Stochastic Gradient Boosting [PDF], 1999
A few variants of stochastic boosting can be used:
Subsample rows before creating each tree.
Subsample columns before creating each tree.
Subsample columns before considering each split.

Generally, aggressive sub-sampling such as selecting only 50% of the data has been shown to be beneficial.
According to user feedback, using column sub-sampling prevents over-fitting even more so than the traditional row sub-sampling

XGBoost: A Scalable Tree Boosting System, 2016
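
As a sketch (assuming the xgboost Python package), the three sub-sampling variants correspond to parameters like the following; the 0.5 values are examples in line with the aggressive sub-sampling mentioned above.

# Sub-sampling variants sketched with xgboost parameters (example values only).
from xgboost import XGBRegressor

model = XGBRegressor(
    subsample=0.5,         # subsample rows before creating each tree
    colsample_bytree=0.5,  # subsample columns before creating each tree
    colsample_bynode=0.5,  # subsample columns before considering each split
    n_estimators=200,
)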

  4. Penalized Gradient Boosting
    Additional constraints can be imposed on the parameterized trees in addition to their structure.
    Classical decision trees like CART are not used as weak learners; instead, a modified form called a regression tree is used that has numeric values in the leaf nodes (also called terminal nodes). The values in the leaves of the trees can be called weights in some literature.
    As such, the leaf weight values of the trees can be regularized using popular regularization functions, such as:
    L1 regularization of weights.
    L2 regularization of weights.

The additional regularization term helps to smooth the final learnt weights to avoid over-fitting. Intuitively, the regularized objective will tend to select a model employing simple and predictive functions.

XGBoost: A Scalable Tree Boosting System, 2016
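
As a sketch (again assuming the xgboost Python package), L1 and L2 penalties on the leaf weights are exposed as the reg_alpha and reg_lambda parameters; the values below are examples only.

# L1/L2 regularization of leaf weights sketched with xgboost parameters (example values only).
from xgboost import XGBRegressor

model = XGBRegressor(
    reg_alpha=0.1,   # L1 regularization of leaf weights
    reg_lambda=1.0,  # L2 regularization of leaf weights
    n_estimators=200,
)
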
Gradient Boosting Resources
Gradient boosting is a fascinating algorithm and I am sure you want to go deeper.
This section lists various resources that you can use to learn more about the gradient boosting algorithm.
Gradient Boosting Videos
Gradient Boosting Machine Learning, Trevor Hastie, 2014
Gradient Boosting, Alexander Ihler, 2012
GBM, John Mount, 2015
Learning: Boosting, MIT 6.034 Artificial Intelligence, 2010
xgboost: An R package for Fast and Accurate Gradient Boosting, 2016
XGBoost: A Scalable Tree Boosting System, Tianqi Chen, 2016

Gradient Boosting in Textbooks
Section 8.2.3 Boosting, page 321, An Introduction to Statistical Learning: with Applications in R.
Section 8.6 Boosting, page 203, Applied Predictive Modeling.
Section 14.5 Stochastic Gradient Boosting, page 390, Applied Predictive Modeling.
Section 16.4 Boosting, page 556, Machine Learning: A Probabilistic Perspective.
Chapter 10 Boosting and Additive Trees, page 337, The Elements of Statistical Learning: Data Mining, Inference, and Prediction.

Gradient Boosting Papers
Thoughts on Hypothesis Boosting [PDF], Michael Kearns, 1988
A decision-theoretic generalization of on-line learning and an application to boosting [PDF], 1995
Arcing the edge [PDF], 1998
Stochastic Gradient Boosting [PDF], 1999
Boosting Algorithms as Gradient Descent in Function Space [PDF], 1999

Gradient Boosting Slides
Introduction to Boosted Trees, 2014
A Gentle Introduction to Gradient Boosting, Cheng Li

Gradient Boosting Web Pages
Boosting (machine learning)
Gradient boosting
Gradient Tree Boosting in scikit-learn

Summary
In this post you discovered the gradient boosting algorithm for predictive modeling in machine learning.
Specifically you learned:
The history of boosting in learning theory and AdaBoost.
How the gradient boosting algorithm works with a loss function, weak learners and an additive model.
How to improve the performance of gradient boosting with regularization.

Do you have any questions about the gradient boosting algorithm or about this post? Ask your questions in the comments and I will do my best to answer.

