Proceedings Paper

Comparison of Privacy-Preserving Distributed Deep Learning Methods in Healthcare

Journal

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-80432-9_34

Keywords

Privacy-preserving; Distributed deep learning; Federated learning; Split learning; SplitFed; Medical imaging

Abstract

This study compares three privacy-preserving distributed learning techniques for building binary classification models that detect tuberculosis, analysing their classification performance, communication and computational costs, and training time. A novel distributed learning architecture, SplitFedv3, and an alternate mini-batch training scheme for split learning are proposed; both outperform existing methods in the experiments.
Data privacy regulations prevent healthcare centres and hospitals from sharing medical data with other organizations, which in turn impedes the development of deep learning models in the healthcare domain. Distributed deep learning methods enable models to be trained without sharing data from these centres, while still preserving the privacy of the data they hold. In this paper, we compare three privacy-preserving distributed learning techniques: federated learning, split learning, and SplitFed. We use these techniques to develop binary classification models for detecting tuberculosis from chest X-rays and compare them in terms of classification performance, communication and computational costs, and training time. We propose a novel distributed learning architecture called SplitFedv3, which performs better than split learning and SplitFedv2 in our experiments. We also propose alternate mini-batch training, a new training technique for split learning that performs better than alternate client training, where clients take turns training the model.
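The aggregation step underlying federated learning, one of the three techniques compared above, can be sketched as a weighted average of client model parameters (FedAvg-style). This is a minimal illustrative sketch, not the authors' implementation; the `fed_avg` function, the toy two-client parameter lists, and the dataset sizes are all assumptions for demonstration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average per-layer parameters across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding two parameter arrays
# (e.g. one layer's weights and biases) after a round of local training.
clients = [
    [np.ones((2, 2)), np.zeros(2)],        # client A's parameters
    [np.full((2, 2), 3.0), np.ones(2)],    # client B's parameters
]
sizes = [100, 300]  # local dataset sizes -> weights 0.25 and 0.75

global_model = fed_avg(clients, sizes)
# Each entry is the size-weighted mean of the clients' corresponding layer.
```

In a full round, the server would broadcast `global_model` back to the clients for the next round of local training; split learning and SplitFed instead exchange intermediate activations at a cut layer rather than whole models.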

