3.8 Proceedings Paper

Oblivious Neural Network Predictions via MiniONN Transformations

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3133956.3134056

Keywords

privacy; machine learning; neural network predictions; secure two-party computation

Funding

  1. TEKES - the Finnish Funding Agency for Innovation (CloSer project) [3881/31/2016]
  2. Intel (Intel Collaborative Research Institute for Secure Computing, ICRI-SC)

Abstract

Machine learning models hosted in a cloud service are increasingly popular but pose a privacy risk: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about the client's input and the client learns nothing about the model. We present MiniONN, the first approach for transforming an existing neural network into an oblivious neural network that supports privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for operations commonly used in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained on standard datasets.
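The abstract describes oblivious predictions built on secure two-party computation but gives no protocol details. As a rough, hypothetical illustration of that setting (not MiniONN's actual protocol), the Python sketch below shows additive secret sharing of a linear layer's input and output between a client and a server; the interactive secure-multiplication step that a real protocol would perform is stubbed out, and all names and values are invented for the example.

```python
import secrets

# Toy modulus; a real protocol would fix a ring such as Z_{2^l}.
MOD = 2 ** 32

def share(x):
    """Split integer x into two additive shares modulo MOD."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(a, b):
    """Recombine two additive shares."""
    return (a + b) % MOD

def linear_layer(W, x):
    """Plain (non-oblivious) matrix-vector product W.x modulo MOD."""
    return [sum(w * v for w, v in zip(row, x)) % MOD for row in W]

if __name__ == "__main__":
    x = [3, 1, 4]                # client's private input (toy values)
    W = [[2, 0, 5], [1, 7, 1]]   # server's private weights (toy values)

    # The client secret-shares its input; each share alone looks random
    # and reveals nothing about x.
    client_in, server_in = zip(*(share(v) for v in x))

    # Stand-in for the interactive 2PC step: in a real protocol the two
    # parties would jointly compute additive shares of W.x (e.g. using
    # precomputed multiplication triplets) without either side seeing the
    # other's data. Here we just compute W.x in the clear and re-share it.
    y = linear_layer(W, x)
    server_out, client_out = zip(*(share(v) for v in y))

    # Sanity check: the output shares reconstruct the true layer output.
    assert [reconstruct(a, b) for a, b in zip(server_out, client_out)] == y
    print("reconstructed layer output:", y)
```

The sketch only covers the additive-sharing idea for a linear transformation; non-linear operations such as activation functions require different oblivious techniques, which are not shown here.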
