Article

Differentially Private ADMM for Regularized Consensus Optimization

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 66, Issue 8, Pages 3718-3725

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2020.3022856

Keywords

Privacy; Cost function; Convergence; Convex functions; Machine learning; ADMM; differential privacy; distributed optimization

Funding

  1. Army Research Office [W911NF-16-1-0448]
  2. NSF [SaTC-1618768, CPS-1739344, 1704274, 1741338]
  3. Division of Computer and Network Systems
  4. Directorate for Computer & Information Science & Engineering [1704274] (Funding Source: National Science Foundation)

Abstract

This article introduces a new variant of ADMM that can preserve agents' differential privacy in consensus optimization. The study shows that to achieve the best convergence performance at a certain privacy level, the magnitude of injected noise should decrease as the algorithm progresses.
Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. The message exchange of the ADMM in multiagent optimization may reveal agents' sensitive information, which can be overheard by malicious attackers. This drawback hinders the application of the ADMM to privacy-aware multiagent systems. In this article, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning and private usage patterns in resource allocation. We develop a variant of the ADMM that can preserve agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which the designated level of differential privacy can be achieved. Furthermore, the convergence properties of the proposed differentially private ADMM are analyzed under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients, and the regularizer has smooth gradients or bounded subgradients. We find that to attain the best convergence performance given a certain privacy level, the magnitude of the injected noise should decrease as the algorithm progresses. Additionally, the choice of the number of iterations should balance the tradeoff between the convergence and the privacy leakage of the ADMM, which is explicitly characterized by the derived upper bounds on convergence performance. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm. In particular, we apply the proposed algorithm to multiagent linear-quadratic control with private information to showcase its merit in control applications.
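
To make the mechanism concrete, below is a minimal Python sketch of noise-injected consensus ADMM, assuming quadratic local costs f_i(x) = 0.5*||A_i x - b_i||^2, an l1 regularizer, and Laplace noise with a geometrically decaying scale added to the broadcast consensus variable. The parameter names (rho, sigma0, gamma) and the noise calibration are illustrative assumptions, not the paper's exact updates or privacy accounting.

    import numpy as np

    rng = np.random.default_rng(0)

    def soft_threshold(v, tau):
        # Proximal operator of tau*||.||_1; handles the l1 regularizer.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def dp_consensus_admm(As, bs, lam=0.1, rho=1.0, sigma0=1.0, gamma=0.9, iters=100):
        N, n = len(As), As[0].shape[1]
        z = np.zeros(n)                        # public consensus variable
        us = [np.zeros(n) for _ in range(N)]   # scaled dual variables
        # Each agent's x-update has a closed form for a quadratic cost.
        solves = [np.linalg.inv(A.T @ A + rho * np.eye(n)) for A in As]
        for k in range(iters):
            # Local primal updates: x_i = argmin f_i(x) + (rho/2)||x - z + u_i||^2.
            xs = [solves[i] @ (As[i].T @ bs[i] + rho * (z - us[i])) for i in range(N)]
            # Consensus update (prox of the l1 regularizer), then inject
            # noise into the public signal before broadcasting it.
            z_clean = soft_threshold(
                np.mean([xs[i] + us[i] for i in range(N)], axis=0), lam / (rho * N))
            sigma_k = sigma0 * gamma ** k      # noise magnitude decays over iterations
            z = z_clean + rng.laplace(scale=sigma_k, size=n)
            # Dual updates track the residual against the noisy broadcast.
            us = [us[i] + xs[i] - z for i in range(N)]
        return z

    # Example: five agents, each holding a private least-squares problem.
    As = [rng.standard_normal((20, 10)) for _ in range(5)]
    bs = [rng.standard_normal(20) for _ in range(5)]
    z_private = dp_consensus_admm(As, bs)

The decaying schedule sigma_k = sigma0 * gamma**k reflects the paper's finding that, for a given privacy level, the injected noise should shrink as the iterates approach consensus, while the total number of iterations must be chosen to balance convergence against cumulative privacy leakage.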
