Article

Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization

Journal

IEEE SIGNAL PROCESSING MAGAZINE
Volume 37, Issue 3, Pages 92-101

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MSP.2020.2975210

Keywords

-

Funding

  1. National Science Foundation [CCF-1717391]
  2. U.S. Navy [N000141612245]
  3. U.S. Department of Defense (DOD) [N000141612245]

Abstract

This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^{m} f_i(x)$ in a system consisting of $m$ agents embedded in a communication network. Each agent $i$ has a collection of data captured by its privately known objective function $f_i(x)$. The distributed algorithms considered here obey two simple rules: the privately known agent functions $f_i(x)$ cannot be disclosed to any other agent in the network, and every agent is aware only of the local connectivity structure of the network, i.e., it knows just its one-hop neighbors. While obeying these two rules, the algorithms that the agents execute should find a solution to the overall system problem despite this limited knowledge of the objective function and the restriction to local communications. This article surveys such algorithms, which typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from each agent to all other agents in the network.

