Article

Nonparametric Compositional Stochastic Optimization for Risk-Sensitive Kernel Learning

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 69, Pages 428-442

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2020.3046464

Keywords

Kernel; Stochastic processes; Optimization; Complexity theory; Convergence; Training; Random variables; Stochastic optimization; kernel learning; non-parametric estimation; convex optimization; non-convex optimization

Funding

  1. Army Research Laboratory [W911NF-16-2-0008]
  2. DST [MTR/2019/000181]
  3. SMART Scholarship for Service
  4. ARL DSI-TRC Seedling
  5. DCIST CRA

Abstract

This work addresses optimization problems whose objective is a nonlinear function of an expected value, introducing COLK, a memory-efficient stochastic algorithm for compositional stochastic programs. A tradeoff between the complexity of the function parameterization and the achievable convergence accuracy is established for both convex and non-convex objectives under constant step-sizes. Experiments on risk-sensitive supervised learning tasks demonstrate COLK's consistent convergence and reliability, showing a favorable tradeoff between model complexity, convergence, and statistical accuracy for heavy-tailed data distributions.
In this work, we address optimization problems where the objective function is a nonlinear function of an expected value, i.e., compositional stochastic programs. We consider the case where the decision variable is not vector-valued but instead belongs to a Reproducing Kernel Hilbert Space (RKHS), motivated by risk-aware formulations of supervised learning. We develop the first memory-efficient stochastic algorithm for this setting, which we call Compositional Online Learning with Kernels (COLK). COLK, at its core a two time-scale stochastic approximation method, addresses two facts: (i) compositions of expected-value problems cannot be handled by the standard stochastic gradient method because of the presence of an inner expectation; and (ii) the RKHS-induced parameterization has complexity proportional to the iteration index, which we mitigate through greedily constructed subspace projections. We provide, for the first time, a non-asymptotic tradeoff between the complexity of a function parameterization and its required convergence accuracy, for both strongly convex and non-convex objectives under constant step-sizes. Experiments with risk-sensitive supervised learning demonstrate that COLK converges consistently and performs reliably even when the data contains many outliers, and thus marks a step towards overcoming overfitting. Specifically, we observe a favorable tradeoff between model complexity, consistent convergence, and statistical accuracy for data drawn from heavy-tailed distributions.
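For readers unfamiliar with compositional stochastic programs, the objective has the form L(w) = f(E_xi[g_xi(w)]), so a single unbiased stochastic gradient is unavailable because the inner expectation sits inside the nonlinear outer function. The sketch below illustrates the generic two time-scale idea the abstract refers to: an auxiliary variable tracks the inner expectation on a fast time scale while the decision variable is updated on a slow time scale. It is a minimal illustration only, not the authors' COLK implementation: it uses a finite-dimensional parameter rather than an RKHS function, omits the greedy subspace projections, and the oracles sample_g, sample_jac_g, grad_f and the step sizes are hypothetical placeholders for a toy problem.

```python
import numpy as np

# Minimal sketch of a two time-scale stochastic compositional gradient update
# for L(w) = f(E_xi[g_xi(w)]), on a toy problem with g_xi(w) = A_xi @ w + b_xi
# and f(y) = 0.5 * ||y||^2. All oracles below are hypothetical stand-ins.

rng = np.random.default_rng(0)

def sample_g(w):
    """One stochastic sample of the inner map g_xi(w) = A_xi @ w + b_xi."""
    A = np.eye(2) + 0.1 * rng.standard_normal((2, 2))
    b = 0.1 * rng.standard_normal(2)
    return A @ w + b

def sample_jac_g(w):
    """One independent stochastic sample of the Jacobian of g_xi at w."""
    return np.eye(2) + 0.1 * rng.standard_normal((2, 2))

def grad_f(y):
    """Gradient of the outer function f(y) = 0.5 * ||y||^2."""
    return y

def scgd(w0, alpha=5e-3, beta=5e-2, iters=5000):
    """Two time-scale update: y tracks the inner expectation E[g_xi(w)] on the
    fast time scale (step beta); w moves on the slow time scale (step alpha)
    along the chain-rule estimate jac_g(w)^T grad_f(y)."""
    w = np.array(w0, dtype=float)
    y = sample_g(w)                                    # initialize auxiliary variable
    for _ in range(iters):
        y = (1.0 - beta) * y + beta * sample_g(w)      # fast averaging of inner expectation
        w = w - alpha * sample_jac_g(w).T @ grad_f(y)  # slow stochastic quasi-gradient step
    return w

if __name__ == "__main__":
    print(scgd(w0=[1.0, -2.0]))  # should approach the minimizer of f(E[g(w)]), here w = 0
```

In COLK, the analogous update acts on a kernel expansion rather than a parameter vector, and the dictionary of kernel centers is pruned after each step by a greedy subspace projection so that memory does not grow linearly with the iteration index.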

