Article

Fast Convergence Rates for Distributed Non-Bayesian Learning

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 62, Issue 11, Pages 5538-5553

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2017.2690401

Keywords

Algorithm design and analysis; Bayes methods; distributed algorithms; estimation; learning

Funding

  1. National Science Foundation [CCF-1017564, CMMI-1463262]
  2. Office of Naval Research [N00014-12-1-0998]
  3. Division of Civil, Mechanical and Manufacturing Innovation
  4. Directorate for Engineering, National Science Foundation [1740452]

Abstract

We consider the problem of distributed learning, where a network of agents collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a nonasymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
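As a rough illustration of the style of update studied in this line of work, the sketch below implements a generic log-linear (geometric-averaging) belief rule over a row-stochastic mixing matrix: each agent averages its neighbors' log-beliefs and then reweights by its own local likelihood. The function name, the toy Bernoulli observation model, and the mixing weights are illustrative assumptions, not the paper's exact protocol or analysis.

    import numpy as np

    def belief_update(beliefs, A, log_likelihoods):
        # One synchronous step of a generic log-linear distributed learning rule:
        # a consensus step on log-beliefs followed by a local Bayesian-style
        # reweighting with each agent's own log-likelihoods.
        log_mix = A @ np.log(beliefs)           # geometric averaging of neighbors' beliefs
        log_post = log_mix + log_likelihoods    # local likelihood update
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        post = np.exp(log_post)
        return post / post.sum(axis=1, keepdims=True)    # renormalize each row

    # Toy example (assumed setup): 3 agents, 2 hypotheses, Bernoulli observations.
    rng = np.random.default_rng(0)
    A = np.array([[0.50, 0.25, 0.25],           # row-stochastic mixing weights
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    beliefs = np.full((3, 2), 0.5)              # uniform initial beliefs
    p = np.array([0.7, 0.4])                    # hypothesis 0 matches the data
    for _ in range(100):
        s = rng.random(3) < p[0]                # each agent's private observation
        ll = np.where(s[:, None], np.log(p), np.log(1.0 - p))
        beliefs = belief_update(beliefs, A, ll)
    print(beliefs)

In this toy run the belief mass on the mismatched hypothesis decays roughly geometrically, which is the kind of concentration behavior the abstract refers to.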
