Article

Differentially Private Distributed Optimization via State and Direction Perturbation in Multiagent Systems

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 67, Issue 2, Pages 722-737

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2021.3059427

Keywords

Convergence; Cost function; Privacy; Differential privacy; Perturbation methods; Multi-agent systems; Sensor fusion; Distributed optimization (DO); Gradient tracking; Mean-square error (MSE)

Funding

  1. NSF of China [61922058, 62025305, 62003302, 61933009]
  2. NSF of Shanghai Municipality of China [18ZR1419900]
  3. Program of Shanghai Academic Research Leader [19XD1421800]

Summary

This article studies the problem of achieving convergence and differential privacy simultaneously in distributed optimization algorithms. It proposes the Distributed algorithm via Direction and State Perturbation (DiaDSP), which perturbs both states and directions with decaying Laplace noise. The algorithm is shown to converge in mean and almost surely, even with a constant stepsize; the optimal stepsize for the fastest convergence rate is established, and the tradeoff between differential privacy and convergence accuracy is characterized.

Abstract

This article studies the problem of distributed optimization in multiagent systems, where each agent seeks to minimize the sum of all agents' objective functions using only local information. For security, each agent must keep its objective function private from other agents and potential eavesdroppers. We first prove the impossibility of guaranteeing convergence and differential privacy simultaneously by perturbing states in exact distributed optimization algorithms. Motivated by this result, we design a completely distributed algorithm, Distributed algorithm via Direction and State Perturbation (DiaDSP), that achieves differential privacy by perturbing both states and directions with decaying Laplace noise. Unlike most existing works, which require decaying stepsizes to ensure convergence, we show that DiaDSP converges in mean and almost surely even with a constant stepsize. In particular, we prove linear convergence in mean by assuming only that the sum of all cost functions is strongly convex. R-linear convergence is proved under the assumption of Lipschitz gradients rather than bounded gradients. The optimal stepsize for the fastest convergence rate is also established. Moreover, we describe the privacy properties and characterize the tradeoff between differential privacy and convergence accuracy. Simulations are conducted on a typical sensor fusion problem to validate the theoretical results.
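The abstract describes a gradient-tracking scheme in which each agent perturbs both the state and the tracking direction it shares with neighbors, using Laplace noise whose scale decays over the iterations. Below is a minimal sketch of that mechanism, not the authors' exact DiaDSP algorithm: the ring network, quadratic costs, mixing weights, stepsize eta, and noise schedule s0 * q**k are all illustrative assumptions, and the tracking update follows a standard DIGing-style recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (NOT from the paper): agent i holds the private
# quadratic cost f_i(x) = 0.5 * a_i * (x - b_i)^2, a toy stand-in for
# the sensor-fusion objectives used in the paper's simulations.
n = 5
a = rng.uniform(0.5, 2.0, n)   # private curvatures
b = rng.uniform(-1.0, 1.0, n)  # private targets

def grad(i, x):
    """Gradient of agent i's private cost at x."""
    return a[i] * (x - b[i])

# Doubly stochastic mixing matrix over a ring graph (illustrative choice).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

eta = 0.1          # constant stepsize, as the paper allows
s0, q = 1.0, 0.9   # Laplace noise scale s0 * q**k, decaying geometrically

x = rng.standard_normal(n)                       # agent states
y = np.array([grad(i, x[i]) for i in range(n)])  # tracking directions
g_prev = y.copy()

for k in range(200):
    scale = s0 * q**k
    # Perturb BOTH the shared states and the shared directions before
    # communication, mirroring the state-and-direction idea of DiaDSP.
    x_shared = x + rng.laplace(0.0, scale, n)
    y_shared = y + rng.laplace(0.0, scale, n)

    x = W @ x_shared - eta * y_shared            # consensus + descent step
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = W @ y_shared + g_new - g_prev            # gradient tracking update
    g_prev = g_new

x_star = np.sum(a * b) / np.sum(a)  # minimizer of the aggregate cost
print("states:", np.round(x, 3), " optimum:", round(x_star, 3))
```

Because q < 1, the injected noise vanishes over time, which is what allows a constant stepsize to still drive the agents toward the aggregate optimum; a slower decay (larger q) buys more privacy at the cost of accuracy, mirroring the tradeoff characterized in the paper.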
