Article

DNN-DP: Differential Privacy Enabled Deep Neural Network Learning Framework for Sensitive Crowdsourcing Data

Journal

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCSS.2019.2950017

Keywords

Data models; Training; Data privacy; Crowdsourcing; Privacy; Predictive models; Computational modeling; Adaptive noise; crowdsourcing data; deep neural network (DNN); differential privacy (DP)

Funding

  1. Qinglan Project of Jiangsu Province


Deep neural network (DNN) learning has seen significant application in many fields, especially prediction and classification. Frequently, the data used for training are provided by crowdsourcing workers, and the training process may violate their privacy. A qualified prediction model should protect data privacy in both the training and the classification/prediction phases. To address this issue, we develop a differential privacy (DP)-enabled DNN learning framework, DNN-DP, which intentionally injects noise into the affine transformation of the input data features and thereby provides DP protection for the crowdsourced sensitive training data. Specifically, we estimate the importance of each feature with respect to the target categories and follow the principle that less noise is injected into more important features, preserving the data utility of the model. Moreover, we design an adaptive coefficient for the added noise to accommodate heterogeneous feature value ranges. Theoretical analysis proves that DNN-DP is ε-differentially private in its computation, and a simulation on the US Census data set demonstrates that our method achieves better predictive accuracy than other existing privacy-aware machine learning methods.
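The idea described in the abstract, injecting Laplace noise into the affine transformation with less noise on more important features and an adaptive per-feature coefficient, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the importance scores, the form of the adaptive coefficient, and the budget-splitting scheme here are all assumptions made for the example.

```python
import numpy as np

def dp_affine_forward(x, W, b, importance, epsilon, rng=None):
    """Illustrative sketch of importance-weighted DP noise injection.

    x          : input feature vector, shape (d,)
    W, b       : affine layer parameters, shapes (k, d) and (k,)
    importance : nonnegative per-feature importance scores, shape (d,)
    epsilon    : total privacy budget for this transformation
    """
    rng = rng or np.random.default_rng()
    # Split the privacy budget across features in proportion to importance,
    # so more important features get a larger share and hence less noise.
    share = importance / importance.sum()
    # Adaptive coefficient: scale noise to each feature's value range
    # (approximated here by the feature's magnitude plus one).
    coeff = np.abs(x) + 1.0
    # Laplace scale shrinks as a feature's budget share grows.
    scale = coeff / (share * epsilon * len(x))
    noisy_x = x + rng.laplace(0.0, scale)
    # The affine transformation is then computed on the perturbed input.
    return W @ noisy_x + b
```

With a large epsilon the noise scale shrinks toward zero and the output approaches the clean affine transformation, which is the usual privacy/utility trade-off the abstract refers to.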

