Article

Edge-Based Stochastic Gradient Algorithm for Distributed Optimization

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TNSE.2019.2933177

Keywords

Convergence; Optimization; Linear programming; Convex functions; Laplace equations; Machine learning; Training; Distributed convex optimization; machine learning; augmented Lagrange; stochastic averaging gradient

Funding

  1. National Natural Science Foundation of China [61773321, 61673080]
  2. Innovation Support Program for Chongqing Overseas Returnees [cx2017043]
  3. Chongqing Postdoctoral Science Foundation [Xm2017100]

Abstract

This paper investigates distributed optimization problems in which a group of networked nodes collaboratively minimizes the sum of all local objective functions. The local objective function of each node is further defined as the average of a finite set of subfunctions. This setting is motivated by machine learning problems in which large numbers of training samples are distributed across, and known privately to, individual computational nodes. An augmented Lagrange (AL) stochastic gradient algorithm is presented to address the distributed optimization problem; it combines a factorization of the weighted Laplacian with a local unbiased stochastic averaging gradient method. At each iteration, each node evaluates the gradient of only one randomly selected subfunction, and a variance-reduced stochastic averaging gradient technique is applied to approximate the gradient of the local objective function. Strong convexity of the local subfunctions and Lipschitz continuity of their gradients are shown to ensure a linear convergence rate of the proposed algorithm in expectation. Numerical experiments on a logistic regression problem demonstrate the correctness of the theoretical results.
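
For readers unfamiliar with these ingredients, the sketch below illustrates the two building blocks named in the abstract: a SAGA-style unbiased stochastic averaging gradient (each node evaluates one randomly chosen subfunction gradient per iteration and corrects it with a stored gradient table) and an augmented-Lagrangian-style primal-dual consensus step driven by the graph Laplacian. It is a toy illustration on an assumed ring network with synthetic regularized logistic regression data, not the authors' exact edge-based factorized update; the network, problem sizes, and step sizes are assumptions chosen for readability.

# A minimal sketch (not the authors' exact algorithm) of a SAGA-style
# variance-reduced gradient estimator combined with an augmented-Lagrangian
# primal-dual consensus update for distributed logistic regression.
# The network, problem sizes, and step sizes below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: 4 nodes on a ring, each holding 50 private samples.
n_nodes, n_local, dim = 4, 50, 5
A = np.zeros((n_nodes, n_nodes))              # adjacency of the ring graph
for i in range(n_nodes):
    A[i, (i + 1) % n_nodes] = A[(i + 1) % n_nodes, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian

X = rng.normal(size=(n_nodes, n_local, dim))  # private local features
w_true = rng.normal(size=dim)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=(n_nodes, n_local)))

mu = 0.1                                      # ridge term: strong convexity

def subgrad(i, j, w):
    """Gradient of the j-th regularized logistic subfunction at node i."""
    z = y[i, j] * (X[i, j] @ w)
    return -y[i, j] * X[i, j] / (1.0 + np.exp(z)) + mu * w

# SAGA table: last evaluated gradient of every subfunction, plus node averages.
table = np.stack([[subgrad(i, j, np.zeros(dim)) for j in range(n_local)]
                  for i in range(n_nodes)])
table_avg = table.mean(axis=1)

w = np.zeros((n_nodes, dim))                  # primal variable at each node
lam = np.zeros((n_nodes, dim))                # dual (consensus) variable
alpha, beta = 0.05, 0.05                      # assumed primal/dual step sizes

for k in range(2000):
    # 1) Each node samples ONE subfunction and forms the variance-reduced
    #    SAGA estimate of its full local gradient.
    g = np.empty_like(w)
    for i in range(n_nodes):
        j = rng.integers(n_local)
        new = subgrad(i, j, w[i])
        g[i] = new - table[i, j] + table_avg[i]
        table_avg[i] += (new - table[i, j]) / n_local   # update running mean
        table[i, j] = new
    # 2) Augmented-Lagrangian-style primal/dual steps, with the Laplacian
    #    enforcing consensus (a generic stand-in for the paper's edge-based
    #    factorized update).
    w = w - alpha * (g + lam + L @ w)
    lam = lam + beta * (L @ w)

print("consensus disagreement:", np.linalg.norm(L @ w))
print("node-0 solution:", np.round(w[0], 3))

At a fixed point of these updates the Laplacian term vanishes (all nodes agree) and the dual variables, which live in the range of the Laplacian, cancel the summed local gradients, so the common iterate is a stationary point of the global sum; the variance-reduced estimator is what allows constant step sizes and, under the paper's assumptions, a linear rate in expectation.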

