Journal
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
Volume 165, Pages 130-141
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jpdc.2022.03.014
Keywords
Distributed convex optimization; Time-varying network optimization
Funding
- Sharif University of Technology
We propose a time-varying dual accelerated gradient method for minimizing the average of n strongly convex and smooth functions over a time-varying network with n nodes. We prove that the time-varying dual accelerated gradient ascent method converges at an R-linear rate, reaching an epsilon-neighborhood of the solution in O((1/ln(1/c)) ln(M/epsilon)) time, where c is a constant depending on the graph and objective-function parameters and M is a constant depending on the initial values. We test the proposed method on two classes of problems: L2-regularized least squares and logistic classification. For each class, we generate 1000 problems and use Dolan-Moré performance profiles to compare our results with those obtained by several state-of-the-art algorithms, illustrating the efficiency of our method. (C) 2022 Elsevier Inc. All rights reserved.
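The Dolan-Moré performance profiles used in the experiments can be computed from a problems-by-solvers cost matrix. Below is a minimal, generic sketch (not the authors' code); the function name `performance_profile` and the toy data are illustrative assumptions.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.

    T:    (n_problems, n_solvers) array of costs (e.g. run time or
          iteration count); np.inf marks a solver failure. Each problem
          is assumed to be solved by at least one solver.
    taus: 1-D array of ratio thresholds tau >= 1.

    Returns rho with shape (len(taus), n_solvers): rho[i, s] is the
    fraction of problems solver s solves within a factor taus[i] of
    the best solver on that problem.
    """
    best = T.min(axis=1, keepdims=True)        # best cost per problem
    ratios = T / best                          # performance ratios r_{p,s}
    # For each tau, count the fraction of problems with ratio <= tau.
    rho = np.array([(ratios <= tau).mean(axis=0) for tau in taus])
    return rho

# Toy example (hypothetical data): 3 problems, 2 solvers;
# solver 0 fails on the third problem.
T = np.array([[1.0, 2.0],
              [2.0, 2.0],
              [np.inf, 4.0]])
taus = np.array([1.0, 2.0, 4.0])
rho = performance_profile(T, taus)
```

Plotting each column of `rho` against `taus` (usually on a log scale for tau) gives the familiar profile curves: the value at tau = 1 is the fraction of problems on which a solver is (tied for) fastest, and the limit as tau grows is its overall success rate.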