Article

Online Learning With Inexact Proximal Online Gradient Descent Algorithms

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 67, Issue 5, Pages 1338-1352

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2018.2890368

Keywords

Dynamic regret; gradient descent; online convex optimization; subspace tracking

Funding

  1. Indo-US Science and Technology Forum (IUSSTF) Grant [IUSSTF/2018400]

Abstract

We consider nondifferentiable dynamic optimization problems such as those arising in robotics and subspace tracking. Given the computational constraints and the time-varying nature of the problem, a low-complexity algorithm is desirable, while the accuracy of the solution may only increase slowly over time. We put forth the proximal online gradient descent (OGD) algorithm for tracking the optimum of a composite objective function comprising a differentiable loss function and a nondifferentiable regularizer. An online learning framework is considered, and the gradient of the loss function is allowed to be erroneous. Both the gradient error and the dynamics of the function optimum, or target, are adversarial, and the performance of the inexact proximal OGD is characterized via its dynamic regret, expressed in terms of the cumulative gradient error and the path length of the target. The proposed inexact proximal OGD is generalized for application to large-scale problems in which the loss function has a finite-sum structure. In such cases, evaluating the full gradient may not be viable, and a variance-reduced version is proposed that allows the component functions to be subsampled. The efficacy of the proposed algorithms is tested on the problem of formation control in robotics and on the dynamic foreground-background separation problem in video.
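To make the update rule concrete, here is a minimal Python sketch of the inexact proximal OGD step described above, under illustrative assumptions not taken from the paper: the regularizer is an l1 penalty (so its proximal map is soft-thresholding), and grad_oracle, alpha, and lam are hypothetical names for the erroneous gradient oracle, the step size, and the regularization weight.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal map of tau * ||.||_1 (soft-thresholding); stands in
        # for the prox of the paper's generic nondifferentiable regularizer.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def inexact_proximal_ogd(x0, grad_oracle, alpha, lam, T):
        # At each round t: step along a possibly erroneous gradient of the
        # round-t loss f_t, then apply the regularizer's proximal map.
        x = np.asarray(x0, dtype=float).copy()
        iterates = [x.copy()]
        for t in range(T):
            g = grad_oracle(t, x)                            # inexact gradient of f_t at x
            x = soft_threshold(x - alpha * g, alpha * lam)   # proximal OGD update
            iterates.append(x.copy())
        return iterates

    # Toy usage (hypothetical): track a slowly drifting target theta_t under
    # f_t(x) = 0.5 * ||x - theta_t||^2, with additive noise playing the role
    # of the adversarial gradient error.
    rng = np.random.default_rng(0)
    theta = lambda t: np.array([1.0 + 0.01 * t, 0.0, -1.0])
    noisy_grad = lambda t, x: (x - theta(t)) + 0.01 * rng.standard_normal(3)
    xs = inexact_proximal_ogd(np.zeros(3), noisy_grad, alpha=0.5, lam=0.01, T=100)

The variance-reduced variant mentioned in the abstract would replace grad_oracle with a subsampled finite-sum gradient estimate; the subsampling error then enters the regret bound through the same cumulative-error term.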
