3.8 Proceedings Paper

Privacy-Sensitive Parallel Split Learning

Publisher

IEEE
DOI: 10.1109/icoin48656.2020.9016486

Keywords

Distributed Deep Learning; Split Learning; Federated Learning

Funding

  1. National Research Foundation of Korea [2019M3E4A1080391]
  2. Ministry of Health and Welfare [HI19C0572, HI19C0842]

Abstract

Mobile devices and medical centers have access to rich data that is well suited to training deep learning models. However, these highly distributed datasets are privacy-sensitive, which raises privacy concerns when deep learning techniques are applied to them. Split Learning can address these data privacy problems, but because nodes train sequentially rather than in parallel, the resulting model is prone to overfitting. In this paper, we propose a parallel split learning method that prevents the overfitting caused by differences in training order and in the amount of data per node. Our method selects each node's mini-batch size according to the amount of local data on that node and synchronizes the layers held by the nodes during training, so that all nodes share an equivalent deep learning model when training is complete.
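The abstract names two mechanisms: mini-batch sizes chosen in proportion to each node's local data, and synchronization of the node-held layers during training. Below is a minimal sketch of both ideas in PyTorch, assuming a simple proportional rounding rule and plain unweighted averaging of node weights; `proportional_batch_sizes` and `synchronize_node_layers` are hypothetical helpers for illustration, not the paper's published algorithm, whose exact weighting and synchronization schedule may differ.

```python
# Sketch of two ideas from the abstract (illustrative assumptions, not the
# paper's exact method): proportional mini-batch sizing and averaging-based
# synchronization of the layers each node holds.
import torch
import torch.nn as nn

def proportional_batch_sizes(local_data_sizes, global_batch_size):
    # Allocate a share of the global mini-batch to each node in
    # proportion to its local dataset size (rounded, at least 1 sample).
    total = sum(local_data_sizes)
    return [max(1, round(global_batch_size * n / total)) for n in local_data_sizes]

def synchronize_node_layers(node_models):
    # Average the weights of the layers each node holds and load the
    # result back into every node, so all nodes keep equivalent models.
    # (Assumption: simple unweighted averaging.)
    reference = node_models[0].state_dict()
    avg_state = {
        key: torch.stack([m.state_dict()[key].float() for m in node_models]).mean(dim=0)
        for key in reference
    }
    for m in node_models:
        m.load_state_dict(avg_state)

# Usage: three nodes with very different amounts of local data.
local_sizes = [5000, 800, 200]
print(proportional_batch_sizes(local_sizes, 64))  # -> [53, 9, 2]

nodes = [nn.Sequential(nn.Linear(16, 8), nn.ReLU()) for _ in range(3)]
synchronize_node_layers(nodes)
```

With proportional sizing, every node finishes an epoch over its local data in roughly the same number of steps, which is what allows the nodes to train in parallel rather than in sequence.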

