Proceedings Paper

Differentially-Private Deep Learning from an Optimization Perspective

Publisher

IEEE
DOI: 10.1109/INFOCOM.2019.8737494

Keywords

crowdsourcing; data mining; differential privacy; deep learning; optimization

Abstract

With the amount of user data crowdsourced for data mining increasing dramatically, there is an urgent need to protect the privacy of individuals. Differential privacy mechanisms are conventionally adopted to add noise to user data, so that an adversary cannot gain additional knowledge about individuals participating in the crowdsourcing by inferring from the learned model. However, such protection usually comes at the cost of significantly degraded learning results. We observe that the fundamental cause of this problem is that the relationship between model utility and data privacy is not accurately characterized, leading to privacy constraints that are overly strict. In this paper, we address this problem from an optimization perspective and formulate it as one of minimizing the accuracy loss subject to a set of privacy constraints. We use sensitivity to describe the impact of perturbation noise on model utility, and propose a new optimized additive noise mechanism that improves overall learning accuracy while conforming to individual privacy constraints. As a highlight, our privacy mechanism is highly robust in the high-privacy regime (as epsilon -> 0), as well as against changes in the model structure and experimental settings.
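
For context on what a sensitivity-calibrated additive noise mechanism looks like, below is a minimal Python sketch of the classical Laplace mechanism, the standard construction that the abstract says such optimized mechanisms improve upon. This is not the paper's optimized mechanism; the function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon):
    """Classical additive-noise baseline: perturb `value` with Laplace
    noise whose scale is calibrated to the query's L1 sensitivity.

    Noise scale grows as epsilon -> 0 (the high-privacy regime the
    abstract highlights), which is what degrades learning accuracy
    and motivates optimizing the noise instead of fixing its shape.
    """
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=np.shape(value))
    return value + noise

# Hypothetical usage: privatize one clipped gradient during training.
grad = np.array([0.12, -0.58, 0.33])   # clipped so its L1 sensitivity is bounded
private_grad = laplace_mechanism(grad, sensitivity=1.0, epsilon=0.5)
```

Per the abstract, the paper's contribution is to replace this fixed noise shape with additive noise optimized to minimize accuracy loss while still satisfying the same per-individual privacy constraints.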
