Journal
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS
Volume 7, Issue 1, Pages 215-224
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSS.2019.2950017
Keywords
Data models; Training; Data privacy; Crowdsourcing; Privacy; Predictive models; Computational modeling; Adaptive noise; crowdsourcing data; deep neural network (DNN); differential privacy (DP)
Funding
- Qinglan Project of Jiangsu Province
Deep neural network (DNN) learning has seen significant applications in various fields, especially prediction and classification. Frequently, the data used for training are provided by crowdsourcing workers, and the training process may violate their privacy. A qualified prediction model should protect data privacy in both the training and classification/prediction phases. To address this issue, we develop a differential privacy (DP)-enabled DNN learning framework, DNN-DP, which intentionally injects noise into the affine transformation of the input data features and thereby provides DP protection for the crowdsourced sensitive training data. Specifically, we estimate the importance of each feature with respect to the target categories and follow the principle that less noise is injected into more important features, preserving the data utility of the model. Moreover, we design an adaptive coefficient for the added noise to accommodate heterogeneous feature value ranges. Theoretical analysis proves that DNN-DP is $\varepsilon$-differentially private in its computation. A simulation based on the US Census data set demonstrates the superiority of our method in predictive accuracy compared with other existing privacy-aware machine learning methods.
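The abstract describes allocating the privacy budget unevenly across input features: important features receive a larger share of $\varepsilon$ (hence less Laplace noise), and a per-feature coefficient rescales noise to each feature's value range. The following is a minimal NumPy sketch of that idea, not the paper's exact algorithm; the budget-splitting rule, the `ranges`-based sensitivity, and the function name `noisy_affine` are all illustrative assumptions.

```python
import numpy as np

def noisy_affine(x, W, b, importance, ranges, epsilon, rng=None):
    """Sketch of importance-weighted Laplace noise injected into an
    affine transform W @ x + b (assumed mechanism, not the paper's exact rule).

    importance : per-feature relevance scores (higher = more important)
    ranges     : per-feature value ranges, used as a crude sensitivity proxy
    epsilon    : total privacy budget, split across features
    """
    rng = rng or np.random.default_rng()
    # Allocate more budget (less noise) to more important features.
    shares = importance / importance.sum()
    eps_per_feature = epsilon * shares
    # Laplace scale b_i = sensitivity_i / eps_i; wider-ranged features
    # get proportionally larger noise (adaptive coefficient).
    scale = ranges / eps_per_feature
    noisy_x = x + rng.laplace(0.0, scale)
    return W @ noisy_x + b
```

With a large budget the output stays close to the clean affine transform; shrinking `epsilon` or a feature's importance share increases the noise on that feature.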