Article

A NEW PERSPECTIVE ON BOOSTING IN LINEAR REGRESSION VIA SUBGRADIENT OPTIMIZATION AND RELATIVES

Journal

ANNALS OF STATISTICS
Volume 45, Issue 6, Pages 2328-2364

Publisher

Institute of Mathematical Statistics
DOI: 10.1214/16-AOS1505

Keywords

Boosting; forward stagewise regression; linear regression; bias-variance tradeoff; shrinkage; convex optimization; computational guarantees

Funding

  1. MIT-Chile-Pontificia Universidad Catolica de Chile Seed Fund
  2. NSF
  3. ONR [N000141512342]
  4. Betty Moore-Sloan Foundation
  5. AFOSR [FA9550-11-1-0141]
  6. U.S. Department of Defense (DOD) [N000141512342]

Abstract

We analyze boosting algorithms [Ann. Statist. 29 (2001) 1189-1232; Ann. Statist. 28 (2000) 337-407; Ann. Statist. 32 (2004) 407-499] in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FSε) and least squares boosting (LS-Boost(ε)), can be viewed as instances of subgradient descent applied to the loss function defined as the maximum absolute correlation between the features and the residuals. We also propose a minor modification of FSε that yields an algorithm for the LASSO, and that may be easily extended to an algorithm that computes the LASSO path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the LASSO may also be interpreted as the same master algorithm (subgradient descent) applied to a regularized version of the maximum absolute correlation loss function. Using techniques from first-order methods in convex optimization, we derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression, including LS-Boost(ε) and FSε. These computational guarantees inform us about the statistical properties of boosting algorithms: in particular, they provide, for the first time, a precise theoretical description of the amount of data fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
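To make the two boosting updates in the abstract concrete, the following is a minimal NumPy sketch of FSε and LS-Boost(ε), not the authors' implementation. It assumes the columns of X are centered and scaled; the function names fs_epsilon and ls_boost_epsilon, the step size eps, and the iteration budget n_iters are illustrative choices.

import numpy as np

def fs_epsilon(X, y, eps=0.01, n_iters=2000):
    # Incremental forward stagewise regression (FS_epsilon).
    # At each step, pick the feature most correlated (in absolute
    # value) with the current residual and move its coefficient by a
    # fixed amount eps in the direction of that correlation.
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    for _ in range(n_iters):
        corr = X.T @ residual                 # feature-residual correlations
        j = np.argmax(np.abs(corr))           # most correlated feature
        step = eps * np.sign(corr[j])
        beta[j] += step
        residual -= step * X[:, j]
    return beta

def ls_boost_epsilon(X, y, eps=0.1, n_iters=2000):
    # LS-Boost(epsilon): same selection idea, but the step is the
    # univariate least squares coefficient of the residual on the
    # chosen feature, shrunk by the learning rate eps.
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    col_norms = np.sum(X ** 2, axis=0)
    for _ in range(n_iters):
        corr = X.T @ residual
        j = np.argmax(corr ** 2 / col_norms)  # best univariate LS fit
        step = eps * corr[j] / col_norms[j]
        beta[j] += step
        residual -= step * X[:, j]
    return beta

# Tiny demonstration on synthetic, standardized data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.standard_normal(200)
    print(fs_epsilon(X, y).round(2))
    print(ls_boost_epsilon(X, y).round(2))

The only difference between the two loops is the step length along the selected feature: FSε always moves by a fixed eps, while LS-Boost(ε) shrinks the univariate least squares coefficient by eps. This is why the number of iterations, together with eps, controls the tradeoff between data fidelity and regularization that the paper's computational guarantees quantify.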

Authors

Robert M. Freund, Paul Grigas, Rahul Mazumder
