4.6 Article

Hybrid differential privacy based federated learning for Internet of Things

Journal

JOURNAL OF SYSTEMS ARCHITECTURE
Volume 124

Publisher

ELSEVIER
DOI: 10.1016/j.sysarc.2022.102418

Keywords

Federated learning; Privacy protection; Differential privacy; Convergence performance

Funding

  1. NSFC, China [62136002, 61972155, 61825205]
  2. Natural Science Foundation of Shanghai, China [21ZR1419900]
  3. Science and Technology Commission of Shanghai Municipality, China [20DZ1100300]
  4. Open Project Fund from Shenzhen Institute of Artificial Intelligence and Robotics for Society [AC01202005020]
  5. Shanghai Knowledge Service Platform Project [ZF1213]
  6. Open Research Fund of KLATASDS-MOE [KLATASDS2104]
  7. Fundamental Research Funds for the Central Universities, China
  8. Shanghai Trusted Industry Internet Software Collaborative Innovation Center

This paper proposes a secure and reliable federated learning algorithm by integrating hybrid differential privacy into federated learning. The algorithm divides users into two categories according to their different privacy needs, and introduces an adaptive gradient clipping scheme and an improved composition method to reduce the effects of noise and clipping.
Wireless sensor networks have been widely used to achieve fine-grained information collection. However, the large volume of data acquired and processed by sensors raises privacy concerns. Federated learning is a promising and privacy-friendly framework that trains a model across multiple devices or edge nodes holding local data samples without transferring their data to the server. Keeping data local is not sufficient on its own, so differential privacy is often applied within federated learning to protect privacy. However, different users have different privacy requirements, so a single scheme that assumes all users either trust or distrust the server is inappropriate: assuming that all users trust the server yields weak privacy guarantees, while assuming that none do yields poor accuracy. This paper proposes a secure and reliable federated learning algorithm by integrating hybrid differential privacy into federated learning. We divide users into two categories according to their different privacy needs. In addition, we analyze the convergence and privacy bounds of the proposed algorithm and propose an adaptive gradient clipping scheme and an improved composition method to reduce the effects of noise and clipping, respectively. The validity of the algorithm is verified by theoretical analysis and experimental evaluation on real-world datasets.
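The abstract describes the mechanism only at a high level: clients clip their local gradients, and noise is added either locally (for clients that do not trust the server, as in local differential privacy) or centrally (for clients that do, as in central differential privacy) before aggregation. The sketch below is a minimal illustration of that hybrid pattern, not a reproduction of the paper's algorithm; the function names, the fixed clipping threshold, and the noise-scaling choices are assumptions made for demonstration, and the paper's adaptive clipping scheme and improved composition analysis are omitted.

```python
import numpy as np

def clip_gradient(grad, clip_norm):
    """Scale the gradient so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / (norm + 1e-12))

def local_update(grad, clip_norm, noise_multiplier, trusts_server):
    """One client's contribution under an illustrative hybrid trust model.

    Clients that do not trust the server add Gaussian noise locally
    (local DP); clients that trust the server send only the clipped
    gradient and rely on the server to add noise (central DP).
    """
    clipped = clip_gradient(grad, clip_norm)
    if not trusts_server:
        clipped = clipped + np.random.normal(
            0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped

def aggregate(updates, trust_flags, clip_norm, server_noise_multiplier):
    """Server-side averaging; noise for the trusting group is added once here."""
    mean = np.mean(updates, axis=0)
    if any(trust_flags):
        # Noise scaled to the sensitivity of one clipped update in the average.
        mean = mean + np.random.normal(
            0.0, server_noise_multiplier * clip_norm / len(updates),
            size=mean.shape)
    return mean

# Example round with two trusting and two non-trusting clients
# (hypothetical values chosen purely for illustration).
grads = [np.random.randn(10) for _ in range(4)]
flags = [True, True, False, False]
updates = [local_update(g, clip_norm=1.0, noise_multiplier=1.1, trusts_server=f)
           for g, f in zip(grads, flags)]
new_direction = aggregate(updates, flags, clip_norm=1.0,
                          server_noise_multiplier=1.1)
```

An adaptive clipping scheme, as the paper proposes, would replace the fixed `clip_norm` with a value updated across rounds (for example, tracking the observed gradient norms), which this sketch leaves out for brevity.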
