Article

An adaptive federated learning scheme with differential privacy preserving

Publisher

Elsevier
DOI: 10.1016/j.future.2021.09.015

Keywords

Federated learning; Differential Privacy; Adaptive gradient descent; Privacy preserving

Funding

  1. National Natural Science Foundation of China [62003291]
  2. Xuzhou Science and Technology Project [KC20112]
  3. Project of Philosophy and Social Science Research in Colleges and Universities in Jiangsu Province [2020SJA1056]
  4. Industry-University-Research Cooperation Project of Jiangsu Science and Technology Department [BY2018124]
  5. National Science and Technology Foundation Project [2019FY100103]


This paper proposes a federated learning scheme that combines an adaptive gradient descent strategy with a differential privacy mechanism for multi-party collaborative modeling scenarios. The scheme improves modeling efficiency and performance under limited communication costs, while providing strong privacy protection for the federated learning process.
Driven by the upcoming development of the sixth-generation communication system (6G), distributed machine learning schemes, represented by federated learning, have shown advantages in data utilization and multi-party cooperative model training. The total communication cost of federated learning depends on the number of communication rounds, the communication consumption of each participant, the setting of a reasonable learning rate, and the guarantee of computational fairness. In addition, the data-isolation strategy of the federated learning framework cannot completely guarantee the privacy of users. Motivated by these problems, this paper proposes a federated learning scheme that combines an adaptive gradient descent strategy with a differential privacy mechanism, suitable for multi-party collaborative modeling scenarios. To ensure that the scheme trains efficiently under limited communication costs, an adaptive learning rate algorithm is used to adjust the gradient descent process and avoid model overfitting and fluctuation, thereby improving modeling efficiency and performance in multi-party computation scenarios. Furthermore, to adapt to ultra-large-scale distributed secure computing scenarios, this research introduces a differential privacy mechanism to resist various background-knowledge attacks. Experimental results demonstrate that the proposed adaptive federated learning model outperforms traditional models under fixed communication costs. The scheme is also robust to different hyperparameter settings and provides stronger, quantifiable privacy preservation for the federated learning process. (C) 2021 Published by Elsevier B.V.
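The two ingredients the abstract names, gradient clipping with Gaussian noise for differential privacy and an adaptive per-coordinate learning rate, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the clipping norm, noise scale, and RMSProp-style adaptive rule (stand-ins for whatever adaptive strategy the paper uses) are all assumptions here.

```python
import math
import random

def clip_and_noise(grad, clip_norm=1.0, noise_std=0.1):
    """Clip a client gradient to bound its sensitivity, then add Gaussian
    noise -- the standard Gaussian-mechanism step for differential privacy.
    clip_norm and noise_std are illustrative values, not from the paper."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [g * scale + random.gauss(0.0, noise_std) for g in grad]

def adaptive_step(params, grad, state, lr=0.01, beta=0.9, eps=1e-8):
    """RMSProp-style adaptive learning rate (an assumed stand-in for the
    paper's adaptive strategy): per-coordinate steps shrink where gradients
    are large, damping the overfitting/fluctuation the abstract mentions."""
    new_state = [beta * s + (1 - beta) * g * g for s, g in zip(state, grad)]
    new_params = [p - lr * g / (math.sqrt(s) + eps)
                  for p, g, s in zip(params, grad, new_state)]
    return new_params, new_state

def federated_round(global_params, client_grads, state):
    """One round: each client's gradient is clipped and noised, the server
    averages the noised gradients, then applies one adaptive update."""
    noised = [clip_and_noise(g) for g in client_grads]
    avg = [sum(col) / len(noised) for col in zip(*noised)]
    return adaptive_step(global_params, avg, state)
```

For example, `federated_round([0.0, 0.0], [[3.0, 4.0], [1.0, 0.0]], [0.0, 0.0])` aggregates two clients' gradients under noise and returns the updated global parameters plus the new adaptive-rate state.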

Reviews

Primary Rating

4.7
Not enough ratings

