Article

Defenses Against Byzantine Attacks in Distributed Deep Neural Networks

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TNSE.2020.3035112

Keywords

Training; Neural networks; Machine learning; Computational modeling; Hardware; Servers; Convergence; Byzantine attacks; deep learning; distributed system

Funding

  1. US National Science Foundation [CNS-1816399]
  2. Commonwealth Cyber Initiative, an investment in the advancement of cyber R&D, innovation and workforce development


Deep learning has gained popularity recently, but distributed training systems may face Byzantine attacks. To address this threat, two efficient algorithms, FABA and VBOR, are proposed and shown experimentally to outperform existing defenses.
Large-scale deep learning has become popular recently, since complex networks can reach high accuracy on image recognition, natural language processing, and other tasks. As the complexity and batch size of deep neural networks grow, training becomes more difficult because of limited computational power and memory. Distributed machine learning or deep learning provides an efficient solution. However, given the possibility of untrusted machines or hardware failures, a distributed system may suffer Byzantine attacks: if some workers are compromised and upload malicious gradients to the parameter server, they can drive the training process toward a wrong model or prevent convergence altogether. To defend against Byzantine attacks, we propose two efficient algorithms: FABA, a Fast Aggregation algorithm against Byzantine Attacks, and VBOR, a Variance-Based Outlier Removal algorithm. FABA uses distance information to remove outliers one by one; VBOR uses variance information to remove outliers in a single pass. Theoretically, we prove the convergence of both algorithms and give insight into their correctness. In experiments, we compare FABA and VBOR with state-of-the-art Byzantine defense algorithms and show superior performance.
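The abstract only sketches the two aggregation rules, so the following is a minimal illustrative sketch of the general ideas, not the authors' actual implementations: a FABA-style rule that iteratively drops the gradient farthest from the current mean, and a VBOR-style rule that makes one pass and keeps only gradients whose distance to the mean is within a threshold derived from the variance. Function names, the `num_byzantine` bound, and the `k`-sigma cutoff are all assumptions for illustration.

```python
import numpy as np

def faba_aggregate(grads, num_byzantine):
    """FABA-style aggregation (sketch, not the paper's exact algorithm):
    repeatedly drop the gradient farthest from the mean of the remaining
    gradients, then average the survivors."""
    grads = [np.asarray(g, dtype=float) for g in grads]
    for _ in range(num_byzantine):
        mean = np.mean(grads, axis=0)
        dists = [np.linalg.norm(g - mean) for g in grads]
        grads.pop(int(np.argmax(dists)))  # remove the current worst outlier
    return np.mean(grads, axis=0)

def vbor_aggregate(grads, k=1.0):
    """VBOR-style aggregation (sketch): a single pass that keeps only
    gradients within k standard deviations of the mean distance-to-mean.
    The k-sigma rule is an assumed concrete form of 'variance-based'."""
    grads = np.asarray(grads, dtype=float)
    mean = grads.mean(axis=0)
    dists = np.linalg.norm(grads - mean, axis=1)
    mu, sigma = dists.mean(), dists.std()
    keep = dists <= mu + k * sigma  # variance-based cutoff
    return grads[keep].mean(axis=0)

# Three honest workers near [1, 1] and one malicious worker:
grads = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [100.0, 100.0]]
print(faba_aggregate(grads, num_byzantine=1))  # close to [1, 1]
print(vbor_aggregate(grads, k=1.0))            # close to [1, 1]
```

Note the cost difference the abstract alludes to: the FABA-style loop recomputes the mean once per removed outlier, while the VBOR-style rule filters all workers in one pass, which matters when the number of workers is large.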

