Article

Privacy-Preserving Computation Offloading for Parallel Deep Neural Networks Training

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2020.3040734

Keywords

Servers; Training; Computational modeling; Privacy; Data models; Task analysis; Cryptography; Deep neural network; federated learning; computation offloading; data privacy; model parallelism

Funding

  1. National Key R&D Program of China [2018YFB1004301, BK20190294, NSFC-61902176, NSFC-61872176]
  2. Fundamental Research Funds for the Central Universities [14380069]
  3. US National Science Foundation [CNS-1816399]
  4. Commonwealth Cyber Initiative

Abstract

This article proposes an alternative DNN training strategy for resource-limited users, allowing them to offload their training tasks to an untrusted server in a privacy-preserving manner. The authors study the feasibility of separating a DNN, design a differentially private activation algorithm, and extend the solution to support parallel DNN model training.
Deep neural networks (DNNs) have brought significant performance improvements to various real-life applications. However, a DNN training task commonly requires intensive computing resources and a large data collection, which makes it hard for personal devices, especially mobile devices, to carry out the entire training process. The federated learning concept has eased this situation, but it remains an open problem for individuals to train their own DNN models at an affordable cost. In this article, we propose an alternative DNN training strategy for resource-limited users. With the help of an untrusted server, end users can offload their DNN training tasks to the server in a privacy-preserving manner. To this end, we study the possibility of separating a DNN, and we design a differentially private activation algorithm for end users to protect the privacy of the data offloaded after model separation. Furthermore, to meet the rising demand for federated learning, we extend the offloading solution to parallel DNN model training with a secure model-weight aggregation scheme that addresses the privacy concern. Experimental results demonstrate the feasibility of the computation offloading solutions for DNN models in both solo and parallel modes.
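As a rough illustration of the two privacy mechanisms mentioned in the abstract, the NumPy sketch below shows (a) a user-side forward pass whose intermediate activations are clipped and perturbed with Laplace noise before being sent to the untrusted server, and (b) a pairwise-masking step commonly used for secure aggregation of model weights. The split point, clipping bound, noise calibration, and masking protocol here are illustrative assumptions, not necessarily the paper's exact algorithms.

```python
import numpy as np

def dp_offload_activations(x, user_layers, epsilon, bound=1.0):
    """Run the user-side (shallow) layers locally, then clip and perturb the
    intermediate activations before offloading them to the server.

    Illustrative only: clipping to [-bound, bound] caps the sensitivity, and
    Laplace noise with scale bound/epsilon releases each activation value
    with epsilon-differential privacy under that sensitivity assumption.
    """
    h = x
    for layer in user_layers:                      # layers kept on the device
        h = layer(h)
    h = np.clip(h, -bound, bound)                  # bound the sensitivity
    noise = np.random.laplace(0.0, bound / epsilon, size=h.shape)
    return h + noise                               # noisy activations go to the server


def mask_weight_update(update, my_id, peer_ids, shared_seeds):
    """Additively mask one user's weight update with pairwise pseudorandom
    masks that cancel when the server sums all users' masked updates, so the
    server only learns the aggregate (a common secure-aggregation pattern).
    `shared_seeds[j]` is a seed previously agreed with peer j.
    """
    masked = update.astype(np.float64)
    for j in peer_ids:
        rng = np.random.default_rng(shared_seeds[j])
        mask = rng.standard_normal(update.shape)
        masked += mask if my_id < j else -mask     # opposite signs cancel in the sum
    return masked
```

In this pattern the server would complete the deep layers on the noisy activations, and in the parallel (federated) mode it only ever sees masked weight updates whose masks cancel in the aggregate.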
