Article

Defending non-Bayesian learning against adversarial attacks

Journal

DISTRIBUTED COMPUTING
Volume 32, Issue 4, Pages 277-289

Publisher

SPRINGER
DOI: 10.1007/s00446-018-0336-4

Keywords

Distributed learning; Byzantine agreement; Fault-tolerance; Adversary attacks; Security

Funding

  National Science Foundation (NSF) award 1421918, Division of Computer and Network Systems, Directorate for Computer & Information Science & Engineering

Abstract

This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state out of m alternatives. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, in which non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults; agents suffering Byzantine faults behave arbitrarily.

We propose two learning rules. In both rules, each non-faulty agent keeps a local variable that is a stochastic vector over the m possible states. Entries of this stochastic vector can be viewed as the scores assigned to the corresponding states by that agent. We say a non-faulty agent learns the underlying truth if, asymptotically, it assigns score one to the true state and zero to every wrong state.

In our first update rule, each agent updates its local score vector as (up to normalization) the product of (1) the likelihood of the cumulative private signals and (2) the weighted geometric average of the score vectors of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all the non-faulty agents asymptotically learn the true state almost surely.

We also propose a modified variant of our first learning rule whose complexity per iteration per agent is O(m^2 n log n), where n is the number of agents in the network. In addition, we show that this modified learning rule works under a less restrictive network identifiability condition.
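As a minimal sketch of one iteration of the first update rule described above (likelihood times weighted geometric average of neighbor score vectors, followed by normalization), consider the following. The function name, array shapes, and example weights are assumptions made for illustration; they are not the paper's notation, and the Byzantine-filtering aspect of the full rule is omitted here.

```python
import numpy as np

def update_score(neighbor_scores, weights, likelihoods):
    """One illustrative iteration of a consensus-based non-Bayesian update.

    neighbor_scores: (k, m) array, score vectors of the agent's k incoming
                     neighbors (including itself), each a stochastic vector
                     over the m candidate states with positive entries.
    weights:         (k,) array of nonnegative combination weights summing to 1.
    likelihoods:     (m,) array, likelihood of the agent's private signal
                     under each of the m candidate states.
    Returns the normalized updated score vector (a stochastic vector).
    """
    # Weighted geometric average prod_j mu_j^{w_j}, computed in log-space
    # for numerical stability.
    log_geo = np.sum(weights[:, None] * np.log(neighbor_scores), axis=0)
    unnormalized = likelihoods * np.exp(log_geo)
    return unnormalized / unnormalized.sum()
```

For example, with two neighbors weighted equally and a signal whose likelihood favors the first state, the updated score vector shifts mass toward that state while remaining a valid probability vector.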
