Article

Privacy in Neural Network Learning: Threats and Countermeasures

Journal

IEEE Network
Volume 32, Issue 4, Pages 61-67

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/MNET.2018.1700447

Funding

  1. National Natural Science Foundation of China [61672151, 61370205, 61772340, 61472255, 61420106010]
  2. Fundamental Research Funds for the Central Universities [EG2018028]
  3. Shanghai Rising-Star Program [17QA1400100]
  4. DHU Distinguished Young Professor Program

Abstract

Algorithmic breakthroughs, the feasibility of collecting huge amounts of data, and increasing computational power contribute to the remarkable achievements of neural networks (NNs). In particular, since deep neural network (DNN) learning delivers astonishing results in speech and image recognition, the number of sophisticated applications based on it has exploded. However, a growing number of privacy leakage incidents have been reported, and their severe consequences have raised serious concerns in this area. In this article, we focus on privacy issues in NN learning. First, we identify the privacy threats during NN training and present privacy-preserving training schemes based on centralized and distributed approaches. Second, we consider the privacy of prediction requests and discuss privacy-preserving protocols for NN prediction. We also analyze the privacy vulnerabilities of trained models: three types of attacks on private information embedded in trained NN models are discussed, and a differential privacy-based solution is introduced.
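
The article's specific differential privacy-based construction is not reproduced here; as a generic illustration of the underlying idea, the sketch below shows a DP-SGD-style training step (per-example gradient clipping plus Gaussian noise calibrated to the clipping bound) for a simple logistic-regression model. The function name dp_sgd_step, the model choice, and the clip_norm and noise_multiplier parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (DP-SGD style, illustrative):
    clip each per-example gradient to L2 norm `clip_norm`, average the
    clipped gradients, then add Gaussian noise scaled to the clipping bound.
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = []
    for xi, yi in zip(X, y):
        # Per-example logistic-loss gradient for a linear model.
        pred = 1.0 / (1.0 + np.exp(-(xi @ w)))
        g = (pred - yi) * xi
        # Clip the per-example gradient to bound each example's influence.
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip_norm / (norm + 1e-12))
        grads.append(g)
    g_avg = np.mean(grads, axis=0)
    # Gaussian noise calibrated to the clipping bound and batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(X), size=w.shape)
    return w - lr * (g_avg + noise)

# Toy usage: one private update on random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = rng.integers(0, 2, size=32)
w = dp_sgd_step(np.zeros(5), X, y, rng=rng)
```

Because each example's gradient contribution is bounded before noise is added, the resulting update limits what an attacker can infer about any single training record from the trained model, which is the same rationale behind differential privacy defenses against the model-level attacks the article surveys.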
