Article

Time-varying dual accelerated gradient ascent: A fast network optimization algorithm

Journal

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jpdc.2022.03.014

Keywords

Distributed convex optimization; Time-varying network optimization

Funding

  1. Sharif University of Technology


We propose a time-varying dual accelerated gradient method for minimizing the average of n strongly convex and smooth functions over a time-varying network with n nodes. We prove that the time-varying dual accelerated gradient ascent method converges at an R-linear rate, with the time to reach an epsilon-neighborhood of the solution being O((1/ln(1/c)) ln(M/epsilon)), where c is a constant depending on the graph and objective function parameters and M is a constant depending on the initial values. We test the proposed method on two classes of problems: L2-regularized least squares and logistic classification problems. For each class, we generate 1000 problems and use Dolan-Moré performance profiles to compare our results with those obtained by several state-of-the-art algorithms, illustrating the efficiency of our method. (C) 2022 Elsevier Inc. All rights reserved.
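The Dolan-Moré performance profiles used in the abstract's experiments can be sketched as follows. This is a minimal NumPy sketch of the standard profile construction, not the paper's own code; the function name and the toy data are illustrative. For each problem p and solver s with cost t(p, s) (e.g. iterations or wall-clock time), the ratio r(p, s) = t(p, s) / min_s t(p, s) is computed, and the profile rho_s(tau) is the fraction of problems solver s handles within a factor tau of the best solver.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile (sketch).

    T    : (n_problems, n_solvers) array of solver costs;
           use np.inf to mark a failure on that problem.
    taus : iterable of ratio thresholds tau >= 1.
    Returns rho with shape (len(taus), n_solvers), where rho[i, s]
    is the fraction of problems solver s solves within a factor
    taus[i] of the best solver on that problem.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = T / best                     # performance ratios r(p, s)
    rho = np.array([(ratios <= tau).mean(axis=0) for tau in taus])
    return rho

# Toy example: 3 problems, 2 solvers (illustrative data only).
T = [[1.0, 2.0],
     [4.0, 2.0],
     [3.0, 3.0]]
rho = performance_profile(T, taus=[1.0, 2.0])
# rho[0] is the fraction of problems on which each solver is fastest;
# at tau = 2.0 both solvers cover all three problems.
```

Plotting rho against tau (typically on a log scale for tau) gives the familiar profile curves: higher and further left means a solver is both faster and more robust across the problem set.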


