Article

Differentially Private ADMM for Regularized Consensus Optimization

Journal

IEEE Transactions on Automatic Control
Volume 66, Issue 8, Pages 3718-3725

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TAC.2020.3022856

Keywords

Privacy; Cost function; Convergence; Convex functions; Machine learning; ADMM; differential privacy; distributed optimization

Funding

  1. Army Research Office [W911NF-16-1-0448]
  2. NSF [SaTC-1618768, CPS-1739344, 1704274, 1741338]
  3. Division of Computer and Network Systems
  4. Directorate for Computer & Information Science & Engineering [1704274] (National Science Foundation)

Abstract

This article introduces a new variant of ADMM that can preserve agents' differential privacy in consensus optimization. The study shows that to achieve the best convergence performance at a certain privacy level, the magnitude of injected noise should decrease as the algorithm progresses.
Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. In multiagent optimization, however, the message exchange of the ADMM may reveal agents' sensitive information, which can be overheard by malicious attackers. This drawback hinders the application of the ADMM to privacy-aware multiagent systems. In this article, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning or private usage patterns in resource allocation. We develop a variant of the ADMM that preserves agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which the designated level of differential privacy is achieved. Furthermore, we analyze the convergence properties of the proposed differentially private ADMM under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients and the regularizer has smooth gradients or bounded subgradients. We find that to attain the best convergence performance at a given privacy level, the magnitude of the injected noise should decrease as the algorithm progresses. Additionally, the choice of the number of iterations should balance the tradeoff between convergence and privacy leakage of the ADMM, a tradeoff that is explicitly characterized by the derived upper bounds on convergence performance. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm. In particular, we apply it to multiagent linear-quadratic control with private information to showcase its merit in control applications.
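To make the mechanism concrete, here is a minimal numerical sketch of the idea, not the authors' exact algorithm or privacy calibration. It assumes a toy problem in which each agent holds a strongly convex quadratic cost, the regularizer is an L1 term handled by soft-thresholding in the consensus update, and Laplace noise with a geometrically decaying scale (the parameters sigma0 and q below are illustrative assumptions) is injected into the public consensus signal before it is broadcast, mirroring the finding that the noise magnitude should decrease as the algorithm progresses.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: N agents, each with a strongly convex quadratic
# cost f_i(x) = 0.5 * a_i * (x - b_i)^2, plus an L1 regularizer lam*|x|.
N, T = 10, 100
a = rng.uniform(1.0, 2.0, N)        # curvatures (strong convexity)
b = rng.normal(0.0, 1.0, N)         # each agent's private data
lam, rho = 0.1, 1.0                 # regularizer weight, ADMM penalty
sigma0, q = 0.5, 0.9                # initial noise scale and decay rate (assumed)

x = np.zeros(N)                     # local primal variables
y = np.zeros(N)                     # dual variables
z = 0.0                             # public consensus variable

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for k in range(T):
    # Local x-update (closed form for quadratic costs):
    # argmin_x f_i(x) + y_i*(x - z) + (rho/2)*(x - z)^2
    x = (a * b + rho * z - y) / (a + rho)
    # Consensus z-update with the L1 prox (soft-thresholding)
    z = soft_threshold(np.mean(x + y / rho), lam / (N * rho))
    # Inject privacy noise into the public signal; its magnitude
    # decays geometrically as the algorithm progresses.
    z += rng.laplace(0.0, sigma0 * q**k)
    # Dual update driven by the perturbed public signal
    y = y + rho * (x - z)

print("consensus estimate:", z)

In the paper, the noise scale is tied to the designated differential-privacy level via derived conditions; the schedule above only illustrates the decaying-magnitude structure and the point of injection.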
