Article

Decentralized Stochastic Optimization With Inherent Privacy Protection

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 68, Issue 4, Pages 2293-2308

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2022.3174187

Keywords

Privacy; Optimization; Data privacy; Distributed databases; Noise measurement; Linear programming; Estimation; Collaborative machine learning; decentralized gradient methods; decentralized stochastic optimization; privacy protection


This article proposes a decentralized stochastic gradient descent (SGD) algorithm that provides inherent privacy protection for participating agents. The algorithm uses a dynamics-based gradient-obfuscation mechanism to ensure privacy without compromising optimization accuracy. It avoids the heavy communication or computation overhead associated with encryption-based privacy solutions.
Decentralized stochastic optimization is a basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing. Since the data involved usually contain sensitive information such as user locations, healthcare records, and financial transactions, privacy protection has become an increasingly pressing need in the implementation of decentralized stochastic optimization algorithms. In this article, we propose a decentralized stochastic gradient descent (SGD) algorithm that embeds inherent privacy protection for every participating agent against both other participating agents and external eavesdroppers. The proposed algorithm builds in a dynamics-based gradient-obfuscation mechanism to enable privacy protection without compromising optimization accuracy, in marked contrast to differential-privacy based solutions for decentralized optimization, which have to trade optimization accuracy for privacy. The dynamics-based privacy approach is encryption-free, and hence avoids the heavy communication and computation overhead common to encryption-based privacy solutions for decentralized stochastic optimization. Besides rigorously characterizing the convergence performance of the proposed decentralized SGD algorithm under both convex and nonconvex objective functions, we also provide a rigorous information-theoretic analysis of the strength of its privacy protection. Simulation results for a distributed estimation problem, as well as numerical experiments for decentralized learning on a benchmark machine learning dataset, confirm the effectiveness of the proposed approach.
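
To make the idea in the abstract concrete, below is a minimal Python sketch of gossip-style decentralized SGD in which each agent masks the state it transmits with a decaying, zero-sum perturbation, so individual messages are obfuscated while the network average (and hence optimization accuracy) is preserved. This is an illustrative stand-in under stated assumptions, not the paper's exact dynamics-based mechanism; the ring topology, quadratic losses, and all parameter choices are hypothetical.

```python
# Illustrative sketch of decentralized SGD with message obfuscation.
# NOT the paper's algorithm: the zero-sum perturbation below is a
# hypothetical stand-in for its dynamics-based gradient obfuscation.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_rows, T = 5, 3, 10, 2000

# Private local data: f_i(x) = 0.5 * ||A_i x - b_i||^2. The global
# optimum solves (sum_i A_i^T A_i) x = sum_i A_i^T b_i.
A = [rng.standard_normal((n_rows, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(n_rows) for _ in range(n_agents)]

# Ring topology with doubly stochastic mixing weights.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1 / 3
    W[i, i] = 1 / 3

x = np.zeros((n_agents, dim))          # row i is agent i's iterate
for t in range(T):
    eta = 1.0 / (10 + t)               # decaying stepsize
    # Decaying perturbations that sum to zero across the network, so
    # transmitted states are masked but the average is left intact.
    noise = rng.standard_normal((n_agents, dim)) * (0.5 / (1 + t))
    noise -= noise.mean(axis=0)        # enforce zero network-wide sum
    messages = x + noise               # what agents actually transmit
    mixed = W @ messages               # consensus step on masked states
    # Stochastic gradient: each agent samples one local data row.
    g = np.empty_like(x)
    for i in range(n_agents):
        k = rng.integers(n_rows)
        r = A[i][k] @ x[i] - b[i][k]
        g[i] = A[i][k] * r * n_rows    # unbiased full-gradient estimate
    x = mixed - eta * g

x_star = np.linalg.solve(sum(a.T @ a for a in A),
                         sum(a.T @ bb for a, bb in zip(A, b)))
print("consensus error: ", np.linalg.norm(x - x.mean(axis=0)))
print("optimality error:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Because the mixing matrix is doubly stochastic and the perturbations sum to zero, the network-wide average evolves exactly as it would without masking, which is the sense in which this kind of obfuscation need not trade accuracy for privacy.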
