Gradient boosting


Gradient boosting is a machine learning technique for regression problems that produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion, as other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function. The gradient boosting method can also be used for classification problems by reducing them to regression with a suitable loss function.

The method was invented by Jerome H. Friedman in 1999 and published in two papers: the first[1] introduced the method, and the second[2] described an important tweak to the algorithm that improves its accuracy and performance.

Gradient boosting

In many supervised learning problems one has an output variable $y$ and a vector of input variables $x$ connected together via a joint probability distribution $P(x, y)$. Using a training set $(x_1, y_1), \ldots, (x_n, y_n)$ of known values of $x$ and corresponding values of $y$, the goal is to find an approximation $\hat{F}(x)$ to a function $F^*(x)$ that minimizes the expected value of some specified loss function $L(y, F(x))$:

F^* = \underset{F}{\arg\min} \, \mathbb{E}_{x,y}\big[ L(y, F(x)) \big].

The gradient boosting method assumes a real-valued $y$ and seeks an approximation $\hat{F}(x)$ in the form of a weighted sum of functions $h_i(x)$ from some class $\mathcal{H}$, called base (or weak) learners:

F(x) = \sum_{i=1}^{M} \gamma_i h_i(x) + \text{const}.

In accordance with the empirical risk minimization principle, the method tries to find an approximation $\hat{F}(x)$ that minimizes the average value of the loss function on the training set. It does so by starting with a model consisting of a constant function $F_0(x)$, and incrementally expanding it in a greedy fashion:

F_0(x) = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L(y_i, \gamma),
F_m(x) = F_{m-1}(x) + \underset{f}{\arg\min} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + f(x_i)\big),

where f is restricted to be a function from the class of base learner functions.

However, the problem of choosing at each step the best f for an arbitrary loss function L is a hard optimization problem in general, and so we'll "cheat" by solving a much easier problem instead.

The idea is to apply a steepest descent step to this minimization problem. If we only cared about predictions at the points of the training set, and $f$ were unrestricted, we'd update the model per the following equation, where we view $L(y, f)$ not as a functional of $f$, but as a function of a vector of values $f(x_1), \ldots, f(x_n)$:

F_m(x) = F_{m-1}(x) - \gamma_m \sum_{i=1}^{n} \nabla_f L(y_i, F_{m-1}(x_i)),
\gamma_m = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L\!\left( y_i,\; F_{m-1}(x_i) - \gamma \frac{\partial L(y_i, F_{m-1}(x_i))}{\partial f(x_i)} \right).

But as f must come from a restricted class of functions (that's what allows us to generalize), we'll just choose the one that most closely approximates the gradient of L. Having chosen f, the multiplier γ is then selected using line search just as shown in the second equation above.
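
For example, for the squared-error loss $L(y, F) = \tfrac{1}{2}(y - F)^2$ the negative gradient at a training point is just the ordinary residual,

-\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} = y_i - F(x_i),

which is why the quantities that the base learners are fitted to in the algorithm below are called pseudo-residuals.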

In pseudocode, the generic gradient boosting method is:[1][3]

Input: training set $\{(x_i, y_i)\}_{i=1}^{n}$, a differentiable loss function $L(y, F(x))$, number of iterations $M$.

Algorithm:

  1. Initialize model with a constant value:
    F_0(x) = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L(y_i, \gamma).
  2. For m = 1 to M:
    1. Compute so-called pseudo-residuals:
      r_{im} = -\left[ \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)} \quad \text{for } i = 1, \ldots, n.
    2. Fit a base learner $h_m(x)$ to the pseudo-residuals, i.e. train it using the training set $\{(x_i, r_{im})\}_{i=1}^{n}$.
    3. Compute the multiplier $\gamma_m$ by solving the following one-dimensional optimization problem:
      \gamma_m = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + \gamma h_m(x_i)\big).
    4. Update the model:
      F_m(x) = F_{m-1}(x) + \gamma_m h_m(x).
  3. Output $F_M(x)$.

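To make the steps above concrete, here is a minimal sketch of the generic algorithm for the squared-error loss $L(y, F) = \tfrac{1}{2}(y - F)^2$, using small regression trees as base learners. It is an illustration only: the use of scikit-learn's DecisionTreeRegressor, the function names, and the closed-form line search for this particular loss are choices made for the example, not part of the method's definition.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, M=100, max_leaf_nodes=8):
    # Step 1: the constant minimizing squared error is the mean of y.
    F0 = np.mean(y)
    F = np.full(len(y), F0)
    learners, gammas = [], []
    for m in range(M):
        # Step 2.1: pseudo-residuals (the negative gradient of the loss);
        # for squared error this is simply y - F.
        r = y - F
        # Step 2.2: fit a base learner to the pseudo-residuals.
        h = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, r)
        pred = h.predict(X)
        # Step 2.3: one-dimensional line search for gamma_m; for squared
        # error it has the closed form <r, h> / <h, h>.
        gamma = np.dot(r, pred) / np.dot(pred, pred)
        # Step 2.4: update the model.
        F = F + gamma * pred
        learners.append(h)
        gammas.append(gamma)
    return F0, learners, gammas

def gradient_boost_predict(X, F0, learners, gammas):
    # Output F_M(x): the initial constant plus the scaled base-learner predictions.
    F = np.full(X.shape[0], F0)
    for h, gamma in zip(learners, gammas):
        F = F + gamma * h.predict(X)
    return F

Fitting returns the constant $F_0$ together with the base learners and their multipliers, and gradient_boost_predict evaluates $F_M$ on new data; with a different loss, only the pseudo-residual and line-search steps would change.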

Gradient tree boosting

Gradient boosting is typically used with decision trees (especially CART trees) of a fixed size as base learners. For this special case Friedman proposes a modification to the gradient boosting method which improves the quality of fit of each base learner.

Generic gradient boosting at the m-th step would fit a decision tree $h_m(x)$ to pseudo-residuals. Let $J$ be the number of its leaves. The tree partitions the input space into $J$ disjoint regions $R_{1m}, \ldots, R_{Jm}$ and predicts a constant value in each region. Using the indicator notation, the output of $h_m(x)$ for input $x$ can be written as the sum:

h_m(x) = \sum_{j=1}^{J} b_{jm} I(x \in R_{jm}),

where $b_{jm}$ is the value predicted in the region $R_{jm}$.[4]

Then the coefficients $b_{jm}$ are multiplied by some value $\gamma_m$, chosen using line search so as to minimize the loss function, and the model is updated as follows:

F_m(x) = F_{m-1}(x) + \gamma_m h_m(x), \qquad \gamma_m = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + \gamma h_m(x_i)\big).

Friedman proposes to modify this algorithm so that it chooses a separate optimal value $\gamma_{jm}$ for each of the tree's regions, instead of a single $\gamma_m$ for the whole tree. He calls the modified algorithm "TreeBoost". The coefficients $b_{jm}$ from the tree-fitting procedure can then simply be discarded and the model update rule becomes:

F_m(x) = F_{m-1}(x) + \sum_{j=1}^{J} \gamma_{jm} I(x \in R_{jm}), \qquad \gamma_{jm} = \underset{\gamma}{\arg\min} \sum_{x_i \in R_{jm}} L\big(y_i, F_{m-1}(x_i) + \gamma\big).
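
As a concrete special case: for the absolute-error loss $L(y, F) = |y - F|$ the per-region line search is solved by the median of the current residuals in the region,

\gamma_{jm} = \underset{x_i \in R_{jm}}{\operatorname{median}} \big( y_i - F_{m-1}(x_i) \big),

which is how Friedman's LAD TreeBoost variant computes its leaf values.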

Size of trees

$J$, the number of terminal nodes in the trees, is a parameter of the method that can be adjusted for the data set at hand. It controls the maximum allowed level of interaction between variables in the model. With $J = 2$ (decision stumps), no interaction between variables is allowed. With $J = 3$ the model may include effects of the interaction between up to two variables, and so on.

Hastie et al.[3] comment that typically $4 \le J \le 8$ works well for boosting and results are fairly insensitive to the choice of $J$ in this range, $J = 2$ is insufficient for many applications, and $J > 10$ is unlikely to be required.

Regularization

Fitting the training set too closely can lead to degradation of the model's generalization ability. Several so-called regularization techniques reduce this overfitting effect by constraining the fitting procedure.

One natural regularization parameter is the number of gradient boosting iterations $M$ (i.e. the number of trees in the model when the base learner is a decision tree). Increasing $M$ reduces the error on the training set, but setting it too high may lead to overfitting. An optimal value of $M$ is often selected by monitoring prediction error on a separate validation data set. Besides controlling $M$, several other regularization techniques are used.
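
For illustration only (scikit-learn, the synthetic data set, and the 75/25 split are assumptions made for this example, not something prescribed by the method), the validation-based choice of $M$ might look as follows; staged_predict exposes the model's predictions after each boosting iteration.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# synthetic data standing in for a real problem
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(n_estimators=500,   # upper bound on M
                                  max_leaf_nodes=8)   # tree size J (see "Size of trees")
model.fit(X_train, y_train)

# staged_predict yields the predictions of F_1, F_2, ..., F_500 on the
# validation set, so the validation error can be monitored as a function of M.
val_errors = [mean_squared_error(y_val, pred) for pred in model.staged_predict(X_val)]
best_M = int(np.argmin(val_errors)) + 1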

Shrinkage

An important part of the gradient boosting method is regularization by shrinkage, which consists in modifying the update rule as follows:

F_m(x) = F_{m-1}(x) + \nu \gamma_m h_m(x), \qquad 0 < \nu \le 1,

where the parameter $\nu$ is called the "learning rate".

Empirically it has been found that using small learning rates (such as $\nu < 0.1$) yields dramatic improvements in a model's generalization ability over gradient boosting without shrinking ($\nu = 1$).[3] However, it comes at the price of increased computational time both during training and querying: a lower learning rate requires more iterations.
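
A hedged illustration of this trade-off (again assuming scikit-learn, whose learning_rate parameter plays the role of $\nu$): the smaller learning rate is paired with a larger number of iterations, and the validation errors of the two models can then be compared directly.

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for nu, M in [(1.0, 100), (0.1, 1000)]:
    # a smaller learning rate typically needs more iterations,
    # but tends to generalize better than nu = 1
    model = GradientBoostingRegressor(learning_rate=nu, n_estimators=M)
    model.fit(X_train, y_train)
    print(nu, M, mean_squared_error(y_val, model.predict(X_val)))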

Stochastic gradient boosting

Soon after the introduction of gradient boosting Friedman proposed a minor modification to the algorithm, motivated by Breiman's bagging method.[2] Specifically, he proposed that at each iteration of the algorithm, a base learner should be fit on a subsample of the training set drawn at random without replacement.[5] Friedman observed a substantial improvement in gradient boosting's accuracy with this modification.

The subsample size is some constant fraction $f$ of the size of the training set. When $f = 1$, the algorithm is deterministic and identical to the one described above. Smaller values of $f$ introduce randomness into the algorithm and help prevent overfitting, acting as a kind of regularization. The algorithm also becomes faster, because regression trees have to be fit to smaller datasets at each iteration. Friedman[2] found that $0.5 \le f \le 0.8$ leads to good results for small and moderately sized training sets. Therefore, $f$ is typically set to 0.5, meaning that one half of the training set is used to build each base learner.

Also, like in bagging, subsampling allows one to define an out-of-bag estimate of the prediction performance improvement by evaluating predictions on those observations which were not used in the building of the next base learner. Out-of-bag estimates help avoid the need for an independent validation dataset, but often underestimate actual performance improvement and the optimal number of iterations.[6]
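
The change to the generic sketch given earlier is small. A hedged illustration of the subsampling step (the function and variable names are illustrative, not from Friedman's papers): each iteration draws the rows used to fit the base learner without replacement, and the remaining rows form the out-of-bag set mentioned above.

import numpy as np

def draw_subsample(n, f, rng):
    # sample a fraction f of the n training rows without replacement;
    # the base learner for this iteration is fit on these rows only
    idx = rng.choice(n, size=int(f * n), replace=False)
    # the remaining rows are "out-of-bag" for this iteration and can be used
    # to estimate the improvement in loss brought by the new base learner
    oob = np.setdiff1d(np.arange(n), idx)
    return idx, oob

rng = np.random.default_rng(0)
fit_idx, oob_idx = draw_subsample(n=1000, f=0.5, rng=rng)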

Number of observations in leaves

Gradient tree boosting implementations often also regularize by imposing a minimum number of observations in the trees' terminal nodes (this parameter is called n.minobsinnode in the R gbm package[6]). It is applied during tree building by ignoring any splits that would lead to nodes containing fewer than this number of training set instances.

Imposing this limit helps to reduce variance in predictions at leaves.
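
Other implementations expose the same constraint under different names; for instance (assuming scikit-learn, which the text above does not mention), GradientBoostingRegressor has a min_samples_leaf parameter:

from sklearn.ensemble import GradientBoostingRegressor

# require at least 10 training observations in every terminal node of every
# tree, analogous to gbm's n.minobsinnode
model = GradientBoostingRegressor(min_samples_leaf=10)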

Usage

Recently, gradient boosting has gained some popularity in the field of learning to rank. The commercial web search engines Yahoo[7] and Yandex[8] use variants of gradient boosting in their machine-learned ranking engines.

Names

The method goes by a wide variety of names. The title of the original publication[1] refers to it as a "Gradient Boosting Machine" (GBM). That same publication and a later one[2] by J. Friedman also use the names "Gradient Boost", "Stochastic Gradient Boosting" (emphasizing the random subsampling technique), "Gradient Tree Boosting" and "TreeBoost" (for the specialization of the method to the case of decision trees as base learners).

A popular open-source implementation for R[6] calls it "Generalized Boosted Models". Sometimes the method is referred to as "functional gradient boosting" or "Gradient Boosted Models", and its tree version is also called "Gradient Boosted Decision Trees" (GBDT) or "Gradient Boosted Regression Trees" (GBRT). Commercial implementations from Salford Systems use the names "Multiple Additive Regression Trees" (MART) and TreeNet, both trademarked.

Implementations

Open-source
  • The R package gbm ("generalized boosted models") implements gradient boosting, including the stochastic variant.[6]
  • scikit-learn includes a gradient tree boosting implementation for Python.
  • OpenCV's machine learning module includes an implementation of gradient boosted trees.[9]
Proprietary
  • TreeNet is a commercial implementation from Salford Systems, possibly "equipped with patent-pending extensions."
  • DTREG TreeBoost
  • An implementation of stochastic gradient boosting is available in STATISTICA.
  • Yahoo and Google have published papers describing MPI- and MapReduce-based parallel implementations of gradient boosting.[10][11] However, they have not made the code publicly available.

See also

References

  1. Friedman, J. H. "Greedy Function Approximation: A Gradient Boosting Machine." (February 1999)
  2. Friedman, J. H. "Stochastic Gradient Boosting." (March 1999)
  3. Hastie, T.; Tibshirani, R.; Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). Springer.
  4. Note: in the case of usual CART trees, the trees are fitted using least-squares loss, and so the coefficient $b_{jm}$ for the region $R_{jm}$ is equal to just the value of the output variable, averaged over all training instances in $R_{jm}$.
  5. Note that this is different from bagging, which samples with replacement because it uses samples of the same size as the training set.
  6. Ridgeway, Greg (2007). Generalized Boosted Models: A guide to the gbm package.
  7. Cossock, David and Zhang, Tong (2008). Statistical Analysis of Bayes Optimal Subset Ranking, page 14.
  8. Yandex corporate blog entry about new ranking model "Snezhinsk" (in Russian)
  9. OpenCV change logs
  10. B. Panda, et al. (2009). PLANET: Massively Parallel Learning of Tree Ensembles with MapReduce.
  11. Jerry Ye, et al. (2009). Stochastic gradient boosted distributed decision trees.