Journal
INFORMATION SCIENCES
Volume 538, Pages 39-53
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2020.05.119
Keywords
Multi-agent systems; Distributed optimization; Differential privacy; DAL
This paper is concerned with the private distributed optimization problem for multi-agent systems, where all agents cooperatively minimize the sum of individual convex objective functions under the additional requirement that the functions remain differentially private. Based on the augmented Lagrangian algorithm, an effective distributed algorithm is presented that ensures privacy by perturbing the primal estimates with additive Laplace noise. The proposed algorithm is likely to converge faster than primal-based algorithms and carries a lower computational burden than ADMM-based private algorithms. By means of an important factorization of the weighted Laplacian matrix, it is proven that the error between the optimal solution and the ergodic average estimates is bounded, with a bound that depends on the privacy level. Within this framework, trade-offs between the privacy level and the convergence accuracy are analyzed. Finally, the effectiveness of the proposed algorithm is illustrated by numerical examples. (C) 2020 Elsevier Inc. All rights reserved.
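The core mechanism the abstract describes, perturbing each agent's primal estimate with additive Laplace noise before it is shared, can be sketched in a few lines. The sketch below is not the paper's algorithm; it uses a simple decentralized gradient scheme with consensus weights, an assumed ring-free complete graph of three agents, hypothetical quadratic objectives f_i(x) = (x - b_i)^2, and an assumed per-step sensitivity bound, purely to illustrate how the noise scale is tied to the privacy budget epsilon and how an ergodic average damps the injected noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem: 3 agents, each holding f_i(x) = (x - b_i)^2,
# so the exact minimizer of sum_i f_i is mean(b) = 3.0.
b = np.array([1.0, 3.0, 5.0])
n = len(b)

# Assumed complete-graph topology with simple averaging weights
# (the paper treats a general weighted Laplacian; this is only for illustration).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)

epsilon = 1.0        # privacy budget: smaller epsilon -> more noise, stronger privacy
sensitivity = 0.1    # assumed bound on how much one agent's data moves an estimate
scale = sensitivity / epsilon  # Laplace scale for epsilon-differential privacy

x = np.zeros(n)      # primal estimates
eta = 0.1            # step size
T = 500
avg = np.zeros(n)    # ergodic (running) average of the estimates

for t in range(1, T + 1):
    # Each agent broadcasts only a Laplace-perturbed copy of its estimate.
    x_noisy = x + rng.laplace(0.0, scale, size=n)
    grad = 2 * (x - b)           # local gradients, computed on private data
    x = W @ x_noisy - eta * grad # consensus on perturbed values + gradient step
    avg += (x - avg) / t         # ergodic average; the zero-mean noise averages out

print(avg)  # entries cluster near the true optimum mean(b) = 3.0
```

The trade-off the abstract analyzes is visible here: shrinking `epsilon` inflates `scale`, which widens the gap between `avg` and the true minimizer, while the ergodic average keeps that gap bounded rather than letting the noise accumulate.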