Article

Nonparametric Compositional Stochastic Optimization for Risk-Sensitive Kernel Learning

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 69, Pages 428-442

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2020.3046464

Keywords

Kernel; Stochastic processes; Optimization; Complexity theory; Convergence; Training; Random variables; Stochastic optimization; kernel learning; non-parametric estimation; convex optimization; non-convex optimization

Funding

  1. Army Research Laboratory [W911NF-16-2-0008]
  2. DST [MTR/2019/000181]
  3. SMART Scholarship for Service
  4. ARLDSI-TRC Seedling
  5. DCIST CRA

Abstract

This work addresses optimization problems whose objective is a nonlinear function of an expected value, introducing COLK, a memory-efficient stochastic algorithm for compositional stochastic programs. A tradeoff between the complexity of the function parameterization and the attainable convergence accuracy is established for both convex and non-convex objectives under constant step-sizes. Experiments on risk-sensitive supervised learning tasks demonstrate COLK's consistent convergence and reliability, showing a favorable tradeoff between model complexity, convergence, and statistical accuracy for heavy-tailed data distributions.
In this work, we address optimization problems where the objective function is a nonlinear function of an expected value, i.e., compositional stochastic programs. We consider the case where the decision variable is not vector-valued but instead belongs to a Reproducing Kernel Hilbert Space (RKHS), motivated by risk-aware formulations of supervised learning. We develop the first memory-efficient stochastic algorithm for this setting, which we call Compositional Online Learning with Kernels (COLK). COLK, at its core a two time-scale stochastic approximation method, addresses the facts that (i) compositions of expected value problems cannot be addressed by the stochastic gradient method due to the presence of an inner expectation; and (ii) the RKHS-induced parameterization has complexity proportional to the iteration index, which is mitigated through greedily constructed subspace projections. We provide, for the first time, a non-asymptotic tradeoff between the complexity of a function parameterization and its required convergence accuracy for both strongly convex and non-convex objectives under constant step-sizes. Experiments with risk-sensitive supervised learning demonstrate that COLK consistently converges and performs reliably even when data is full of outliers, and thus marks a step towards overcoming overfitting. Specifically, we observe a favorable tradeoff between model complexity, consistent convergence, and statistical accuracy for data associated with heavy-tailed distributions.
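
To make the two time-scale structure concrete, below is a minimal Python sketch of a COLK-style loop, assuming a mean-variance risk objective phi(E[h(f)]) with h = (loss, loss^2), square loss, and a Gaussian-kernel RKHS. The objective, step sizes, sampling routine, and the magnitude-based pruning rule (a simplified stand-in for the kernel orthogonal matching pursuit projection referenced in the abstract) are illustrative assumptions rather than the authors' exact construction.

import numpy as np

rng = np.random.default_rng(0)

def kernel(X, Z, bw=1.0):
    # Gaussian kernel matrix between the rows of X and the rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def f_eval(x, centers, weights, bw=1.0):
    # Evaluate f(x) = sum_k w_k * kappa(d_k, x) for the current kernel expansion.
    if len(weights) == 0:
        return 0.0
    return float(kernel(x[None, :], centers, bw) @ weights)

def colk_sketch(sample, T=2000, alpha=0.05, beta=0.1, eta=0.5,
                prune_tol=1e-2, bw=1.0):
    dim = sample()[0].shape[0]
    centers = np.empty((0, dim))   # kernel dictionary (one center per retained sample)
    weights = np.empty(0)          # expansion coefficients
    g = np.zeros(2)                # auxiliary variable tracking (E[loss], E[loss^2])

    for _ in range(T):
        # Two independent samples per iteration: one refreshes the running
        # estimate of the inner expectation, the other drives the function update.
        x1, y1 = sample()
        x2, y2 = sample()

        # Faster time scale: exponential averaging of the inner expectation.
        l1 = (f_eval(x1, centers, weights, bw) - y1) ** 2
        g = (1.0 - beta) * g + beta * np.array([l1, l1 ** 2])

        # Slower time scale: stochastic quasi-gradient step in the RKHS for the
        # composition phi(u) = u1 + eta*(u2 - u1^2), so grad phi(u) = (1 - 2*eta*u1, eta).
        err = f_eval(x2, centers, weights, bw) - y2
        l2 = err ** 2
        coeff = ((1.0 - 2.0 * eta * g[0]) + 2.0 * eta * l2) * 2.0 * err

        # Each step appends one kernel center, so the parameterization would grow
        # with the iteration index if left uncompressed.
        centers = np.vstack([centers, x2[None, :]])
        weights = np.append(weights, -alpha * coeff)

        # Simplified stand-in for the paper's KOMP projection: drop centers whose
        # coefficient magnitude is negligible, keeping the model size bounded.
        keep = np.abs(weights) > prune_tol
        centers, weights = centers[keep], weights[keep]

    return centers, weights

# Toy usage: regression on a sine curve with heavy-tailed (Student-t) noise.
def sample():
    x = rng.uniform(-3.0, 3.0, size=1)
    y = float(np.sin(x[0]) + 0.1 * rng.standard_t(df=2))
    return x, y

centers, weights = colk_sketch(sample)
print(f"retained kernel centers: {len(weights)}")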
