4.6 Article

Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions

Journal

IEEE Transactions on Dependable and Secure Computing

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2020.3029899

Keywords

Secure outsourcing; machine learning; convolutional neural network; homomorphic encryption

Funding

  1. National Key Research and Development Program of China [2020AAA0107700]
  2. NSFC [61822207, U1636219, 61822309, 61773310]
  3. Outstanding Youth Foundation of Hubei Province [2017CFA047]
  4. Fundamental Research Funds for the Central Universities [2042019kf0210, 2020kfyXJJS075]
  5. General Research Fund from Research Grants Council, University Grants Committee, Hong Kong [CUHK 14210319]

Abstract

This article proposes a CNN prediction scheme that preserves privacy in the outsourced setting; through optimizations such as secret sharing with offline triplet generation, garbled-circuit ReLU, and average pooling, it improves overall latency while matching the accuracy of the underlying network.

Convolutional neural networks (CNNs) are a popular machine-learning architecture prized for their predictive power, notably in computer vision and medical image analysis. That predictive power requires extensive computation, which encourages model owners to host the prediction service on a cloud platform. This article proposes a CNN prediction scheme that preserves privacy in the outsourced setting, i.e., the model-hosting server cannot learn the query, the (intermediate) results, or the model. Similar to SecureML (S&P'17), a representative work that provides model privacy, we employ two non-colluding servers with secret sharing and triplet generation to minimize the use of heavyweight cryptography. We make the following optimizations for both overall latency and accuracy:

  1. We adopt asynchronous computation and SIMD for offline triplet generation and for parallelizable online computation.
  2. Like MiniONN (CCS'17) and its improvement by the generic EzPC compiler (EuroS&P'19), we use a garbled circuit for the non-polynomial ReLU activation to keep the same accuracy as the underlying network (instead of approximating it, as in SecureML prediction).
  3. For pooling in the CNN, we employ (linear) average pooling, which achieves almost the same accuracy as the (non-linear, and hence less efficient) max pooling used by MiniONN and EzPC.

Considering both offline and online costs, our experiments on the MNIST dataset show a latency reduction of 122x, 14.63x, and 36.69x compared to SecureML, MiniONN, and EzPC, respectively, and a reduction in communication costs of 1.09x, 36.69x, and 31.32x. On the CIFAR dataset, our scheme achieves lower latency by 7.14x and 3.48x and lower communication costs by 13.88x and 77.46x when compared with MiniONN and EzPC, respectively.
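To make the secret-sharing and triplet-generation ingredients above concrete, the following is a minimal Python sketch, not the authors' implementation, of additive secret sharing over Z_{2^32} with Beaver-triplet multiplication between two non-colluding servers, plus the observation that (linear) average pooling can be applied to each share locally with no interaction. The ring size, the dealer-style triplet setup, and all function names (share, beaver_mul, sum_pool_on_shares) are assumptions made for illustration; the paper generates triplets with an offline protocol and evaluates ReLU with garbled circuits, neither of which is shown here.

```python
# Illustrative sketch (not the authors' code) of additive secret sharing and
# Beaver-triplet multiplication for two non-colluding servers, plus share-local
# sum pooling. The modulus and function names are assumptions for this example.
import numpy as np

RING = 2 ** 32                      # shares live in Z_{2^32} (assumed modulus)
rng = np.random.default_rng(0)      # stands in for the offline dealer / setup phase


def share(x):
    """Split an integer array x into additive shares with x0 + x1 = x (mod RING)."""
    x0 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    x1 = (x - x0) % RING
    return x0, x1


def reconstruct(x0, x1):
    """Recombine two additive shares."""
    return (x0 + x1) % RING


def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Element-wise product of shared x and y using a precomputed triplet c = a * b.

    Online, the servers only exchange the masked values e = x - a and f = y - b,
    so neither server learns x or y.
    """
    (x0, x1), (y0, y1) = x_sh, y_sh
    (a0, a1), (b0, b1), (c0, c1) = a_sh, b_sh, c_sh
    e = reconstruct((x0 - a0) % RING, (x1 - a1) % RING)   # opened masked value
    f = reconstruct((y0 - b0) % RING, (y1 - b1) % RING)   # opened masked value
    z0 = (f * a0 + e * b0 + c0) % RING                    # server 0, local only
    z1 = (e * f + f * a1 + e * b1 + c1) % RING            # server 1, local only
    return z0, z1


def sum_pool_on_shares(x_sh, k=2):
    """k-by-k sum pooling applied to each share independently.

    Average pooling is linear, so each server pools its own share with no
    interaction; the division by k*k can be folded into later rescaling.
    """
    def pool(s):
        h, w = s.shape
        return s.reshape(h // k, k, w // k, k).sum(axis=(1, 3)) % RING
    return pool(x_sh[0]), pool(x_sh[1])


if __name__ == "__main__":
    x = rng.integers(0, 100, size=(4, 4), dtype=np.uint64)
    y = rng.integers(0, 100, size=(4, 4), dtype=np.uint64)

    # Offline phase: generate and share a multiplication triplet (a, b, c = a*b).
    a = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    b = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    c = (a * b) % RING
    triplet = (share(a), share(b), share(c))

    # Online phase: multiply the shared query and weights element-wise.
    x_sh, y_sh = share(x), share(y)
    z_sh = beaver_mul(x_sh, y_sh, *triplet)
    assert np.array_equal(reconstruct(*z_sh), (x * y) % RING)

    # Linear pooling needs no communication at all.
    p_sh = sum_pool_on_shares(x_sh)
    expected = x.reshape(2, 2, 2, 2).sum(axis=(1, 3)) % RING
    assert np.array_equal(reconstruct(*p_sh), expected)
```

The point of the Beaver trick is that only the masked values e = x - a and f = y - b are exchanged online, so the heavyweight work of producing triplets can be pushed to the offline phase (where the paper applies asynchronous computation and SIMD); the same structure extends from element-wise products to the matrix products of convolution and fully connected layers.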

Authors

