Article

Kernel-based online regression with canal loss

Journal

EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
Volume 297, Issue 1, Pages 268-279

Publisher

ELSEVIER
DOI: 10.1016/j.ejor.2021.05.002

Keywords

Data science; Regression; Nonconvex optimization; Regret bound; Noisy label

Funding

  1. National Natural Science Foundation of China [61873279]
  2. Natural Science Foundation of Shandong Province [ZR2019MA016]
  3. Fundamental Research Funds for the Central Universities [20CX05012A]
  4. Major Scientific and Technological Projects of CNPC [ZD2019-183-008]

The study focuses on a special type of nonconvex loss function in regression and proposes a kernel-based online regression algorithm, NROR, to handle noisy labels. Experimental results show that NROR achieves low prediction errors on datasets with heavy label noise.
Typical online learning methods have produced fruitful results within the framework of online convex optimization. Meanwhile, nonconvex loss functions have received considerable attention for their noise resiliency and sparsity. Existing nonconvex loss functions are typically designed to be smooth so as to ease the design of optimization algorithms; however, such losses no longer yield sparse support vectors. In this work, we focus on regression with a special type of nonconvex loss function (i.e., the canal loss) and propose a kernel-based online regression algorithm, noise-resilient online regression (NROR), to deal with noisy labels. The canal loss is a horizontally truncated loss and has the merit of sparsity. Although the canal loss is nonconvex and nonsmooth, the regularized canal loss has a property similar to convexity, called strong pseudo-convexity. Furthermore, a sublinear regret bound for NROR is proved under certain assumptions. Experimental studies show that NROR achieves low prediction errors, in terms of mean absolute error and root mean squared error, on datasets with heavy label noise. In particular, we check whether the convergence assumptions strictly hold in practice and find that they are rarely violated and that the convergence rate is unaffected. (c) 2021 Elsevier B.V. All rights reserved.
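
The abstract does not spell out the functional form of the canal loss or the exact NROR update, so the following is a minimal sketch. It assumes a horizontally truncated eps-insensitive loss of the form min(cap, max(0, |r| - eps)) and a generic kernelized online subgradient step with Tikhonov regularization; the class and parameter names (OnlineKernelRegressor, eps, cap, eta, lam, gamma) are illustrative, not the authors' notation or API.

```python
import numpy as np

def canal_loss(residual, eps=0.1, cap=1.0):
    # Hypothetical canal loss: an eps-insensitive loss truncated
    # horizontally at `cap`, so a large (noisy) residual contributes
    # at most `cap` to the objective.
    return np.minimum(cap, np.maximum(0.0, np.abs(residual) - eps))

def canal_subgradient(residual, eps=0.1, cap=1.0):
    # Subgradient of the canal loss w.r.t. the prediction f(x).
    # It is zero inside the eps-tube and beyond the truncation point,
    # so only points on the sloped part add support vectors (sparsity).
    abs_r = abs(residual)
    if abs_r <= eps or abs_r - eps >= cap:
        return 0.0
    return -np.sign(residual)

def rbf_kernel(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2))

class OnlineKernelRegressor:
    """Generic kernelized online subgradient method with Tikhonov
    regularization; a sketch in the spirit of NROR, not the paper's
    exact update rule."""

    def __init__(self, eta=0.1, lam=0.01, eps=0.1, cap=1.0, gamma=1.0):
        self.eta, self.lam = eta, lam
        self.eps, self.cap, self.gamma = eps, cap, gamma
        # Kernel expansion f(.) = sum_i alpha_i * k(x_i, .)
        self.sv_x, self.sv_alpha = [], []

    def predict(self, x):
        return sum(a * rbf_kernel(sx, x, self.gamma)
                   for sx, a in zip(self.sv_x, self.sv_alpha))

    def partial_fit(self, x, y):
        g = canal_subgradient(y - self.predict(x), self.eps, self.cap)
        # Regularization shrinks all existing coefficients ...
        self.sv_alpha = [(1.0 - self.eta * self.lam) * a for a in self.sv_alpha]
        # ... and a new support vector is added only when the
        # subgradient is nonzero, keeping the expansion sparse.
        if g != 0.0:
            self.sv_x.append(np.asarray(x, dtype=float))
            self.sv_alpha.append(-self.eta * g)
        return self
```

On a data stream, one would call partial_fit(x_t, y_t) once per example. Under the assumed form of the loss, an outlier whose residual exceeds eps + cap falls in the flat region, receives a zero subgradient, and is simply ignored, which is consistent with the noise-resilience and sparsity behavior the abstract describes.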
