Article

Federated learning for preserving data privacy in collaborative healthcare research

Journal

DIGITAL HEALTH
Volume 8, Issue -, Pages -

Publisher

SAGE PUBLICATIONS LTD
DOI: 10.1177/20552076221134455

Keywords

Federated learning; deep learning; data; security; privacy

Funding

  1. National Institute of General Medical Sciences (NIGMS) of the National Institutes of Health [K23GM140268]
  2. Thomas H. Maren Fund
  3. National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health from the National Institute of General Medical Sciences [K01DK120784, R01GM110240]
  4. UF Research [AWD09459]
  5. Gatorade Trust, University of Florida
  6. National Science Foundation CAREER award from the NIA [1750192, P30AG028740, R01AG05533]
  7. NIGMS [R01GM110240]
  8. NIBIB [1R21EB027344]
  9. National Center for Advancing Translational Sciences and Clinical and Translational Sciences Award [UL1TR000064]
  10. Div Of Information & Intelligent Systems
  11. Direct For Computer & Info Scie & Enginr [1750192] Funding Source: National Science Foundation


Abstract
Generalizability, external validity, and reproducibility are high priorities for artificial intelligence applications in healthcare. Traditional approaches to addressing these elements involve sharing patient data between institutions or practice settings, which can compromise data privacy (individuals' right to prevent the sharing and disclosure of information about themselves) and data security (simultaneously preserving confidentiality, accuracy, fidelity, and availability of data). This article describes insights from real-world implementation of federated learning techniques that offer opportunities to maintain both data privacy and availability via collaborative machine learning that shares knowledge, not data. Local models are trained separately on local data. As they train, they send local model updates (e.g., coefficients or gradients) for consolidation into a global model. In some use cases, global models outperform local models on new, previously unseen local datasets, suggesting that collaborative learning from a greater number of examples, including a greater number of rare cases, may improve predictive performance. Even when sharing model updates rather than data, privacy leakage can occur when adversaries perform property or membership inference attacks, which can be used to ascertain information about the training set. Emerging techniques mitigate risk from adversarial attacks, allowing investigators to maintain both data privacy and availability in collaborative healthcare research. When data heterogeneity between participating centers is high, personalized algorithms may offer greater generalizability by improving performance on data from centers with proportionately smaller training sample sizes. Properly applied, federated learning has the potential to optimize the reproducibility and performance of collaborative learning while preserving data security and privacy.
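The training scheme the abstract describes — local models trained separately on local data, with only model updates (coefficients or gradients) sent for consolidation into a global model — can be sketched with federated averaging. This is a minimal illustrative sketch, not the authors' implementation: the linear model, client data, learning rate, and round counts below are all hypothetical.

```python
# Minimal sketch of federated averaging: each site trains locally on its own
# private data and shares only model coefficients, which a central server
# consolidates into a global model weighted by local sample size.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hypothetical "true" relationship

def make_client(n):
    # Simulated private dataset; raw X and y never leave the site.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

# Three participating centers with unequal sample sizes.
clients = [make_client(n) for n in (50, 200, 80)]
global_w = np.zeros(2)

for _round in range(100):
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        # A few local gradient-descent steps on the site's own data.
        for _ in range(5):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        updates.append(w)   # only the coefficients are shared
        sizes.append(len(y))
    # Server step: consolidate local updates, weighted by sample size.
    global_w = np.average(updates, axis=0, weights=sizes)

print(np.round(global_w, 2))  # converges toward true_w
```

Weighting the consolidation by local sample size is the standard FedAvg choice; the abstract's point about centers with proportionately smaller training samples corresponds to the smaller weights such centers receive here, which is one motivation for the personalized algorithms it mentions.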

