Proceedings Paper

Privacy-Sensitive Parallel Split Learning

Publisher

IEEE
DOI: 10.1109/icoin48656.2020.9016486

Keywords

Distributed Deep Learning; Split Learning; Federated Learning

Funding

  1. National Research Foundation of Korea [2019M3E4A1080391]
  2. Ministry of Health and Welfare [HI19C0572, HI19C0842]

Abstract

Mobile devices and medical centers have access to rich data that is well suited to training deep learning models. However, these highly distributed datasets are privacy-sensitive, which raises privacy concerns when applying deep learning techniques to them. Split learning can address these data-privacy problems, but because nodes train sequentially rather than in parallel, overfitting can occur. In this paper, we propose a parallel split learning method that prevents overfitting caused by differences in training order and in per-node data size. Our method selects each node's mini-batch size according to the amount of its local data and synchronizes the nodes' layers during training, so that all nodes hold an equivalent deep learning model when training is complete.
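
The abstract names two mechanisms: per-node mini-batch sizes proportional to local data, and synchronization of the node-side layers during training. The sketch below illustrates both ideas in Python/PyTorch; it is not the authors' implementation. The helper names (proportional_batch_sizes, sync_client_layers) and the choice of weight averaging as the synchronization step are illustrative assumptions, since the abstract does not specify how the layers are synchronized.

```python
# Minimal sketch (assumptions, not the paper's code) of:
# (1) choosing each node's mini-batch size in proportion to its local
#     data so every node finishes an epoch in the same number of steps;
# (2) synchronizing the node-side layers by parameter averaging so all
#     nodes end up with equivalent weights.

import copy
import torch
import torch.nn as nn

def proportional_batch_sizes(data_sizes, global_batch_budget):
    """Split a global per-step batch budget across nodes in proportion
    to their local dataset sizes (at least 1 sample per node)."""
    total = sum(data_sizes)
    return [max(1, round(global_batch_budget * n / total)) for n in data_sizes]

def sync_client_layers(client_models):
    """Average the parameters of the node-side layers and broadcast the
    result back, so all nodes hold equivalent weights."""
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        ).mean(dim=0)
    for m in client_models:
        m.load_state_dict(avg_state)

# Example: three nodes with very different amounts of local data.
data_sizes = [10_000, 2_000, 500]
print(proportional_batch_sizes(data_sizes, global_batch_budget=64))
# -> [51, 10, 3]: each node completes its epoch in the same number of
#    steps despite the unequal data sizes.

clients = [nn.Linear(16, 8) for _ in range(3)]  # stand-in node-side layers
sync_client_layers(clients)                     # all three now share weights
```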
