Article

Stochastic alternating direction method of multipliers for Byzantine-robust distributed learning

Journal

SIGNAL PROCESSING
Volume 195

Publisher

ELSEVIER
DOI: 10.1016/j.sigpro.2022.108501

Keywords

Distributed machine learning; Alternating direction method of multipliers (ADMM); Byzantine attacks


This paper addresses the problem of distributed learning under Byzantine attacks and proposes a Byzantine-robust stochastic ADMM method. The effectiveness of the proposed method is demonstrated through theoretical analysis and numerical experiments.
This paper aims to solve a distributed learning problem under Byzantine attacks. In the underlying distributed master-worker architecture, there exist a number of unknown but malicious workers, called Byzantine workers, that can send arbitrary messages to the master in order to bias the learning process. In the literature, a total variation (TV) norm-penalized approximation formulation has been investigated to alleviate the effect of Byzantine attacks. Specifically, the TV norm penalty not only forces the local variables at the regular workers to be close to each other, but is also robust to the outliers sent by the Byzantine workers. To handle the separable TV norm-penalized approximation formulation, we propose a Byzantine-robust stochastic alternating direction method of multipliers (ADMM). Theoretically, we prove that the proposed method converges to a bounded neighborhood of the optimal solution at a rate of O(1/k) under mild assumptions, where k is the number of iterations and the size of the neighborhood is determined by the number of Byzantine workers. Numerical experiments on the MNIST and COVERTYPE datasets further demonstrate the effectiveness of the proposed method against various Byzantine attacks. (C) 2022 Elsevier B.V. All rights reserved.
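For context, the TV norm-penalized approximation mentioned in the abstract typically takes the following consensus form in this line of work; the notation below is an illustrative sketch rather than the paper's exact formulation, and the symbols F, f_0, lambda, and the set of regular workers R are assumptions introduced here. Each regular worker i keeps a local copy x_i of the master variable x_0, and the penalty couples them through an l1 term:

\[
\min_{x_0,\,\{x_i\}_{i\in\mathcal{R}}}\;\; \sum_{i\in\mathcal{R}} \mathbb{E}_{\xi_i}\!\left[F(x_i,\xi_i)\right] \;+\; f_0(x_0) \;+\; \lambda \sum_{i\in\mathcal{R}} \lVert x_i - x_0\rVert_1 ,
\]

where F(x_i, xi_i) is worker i's stochastic loss, f_0 is a regularizer at the master, and lambda > 0 trades consensus against robustness. Because the subgradient of the l1 penalty is entrywise bounded by lambda, any single (possibly Byzantine) worker can perturb the master's update only by a bounded amount per iteration, which is the intuition behind convergence to a bounded neighborhood whose size grows with the number of Byzantine workers.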

