Article

dsMTL: a computational framework for privacy-preserving, distributed multi-task machine learning

Journal

BIOINFORMATICS
Volume 38, Issue 21, Pages 4919-4926

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/bioinformatics/btac616

Keywords

-

Funding

  1. Deutsche Forschungsgemeinschaft (DFG) [SCHW 1768/1-1]
  2. German Federal Ministry of Education and Research (BMBF) [01KU1905A, 01ZX1904A]
  3. European Union [826078, 777111]
  4. Intramural Research Program of the NIMH [900142]

Abstract

This study developed a privacy-preserving, distributed multi-task machine learning framework that models comorbidity across multiple data sources and achieved better performance than traditional methods.

Motivation: In multi-cohort machine learning studies, it is critical to differentiate between effects that are reproducible across cohorts and those that are cohort-specific. Multi-task learning (MTL) is a machine learning approach that facilitates this differentiation through the simultaneous learning of prediction tasks across cohorts. Since multi-cohort data can often not be combined into a single storage solution, an MTL application for geographically distributed data sources would be of substantial utility.

Results: Here, we describe the development of 'dsMTL', a computational framework for privacy-preserving, distributed multi-task machine learning that includes three supervised and one unsupervised algorithm. First, we derive the theoretical properties of these methods and the relevant machine learning workflows to ensure the validity of the software implementation. Second, we implement dsMTL as a library for the R programming language, building on the DataSHIELD platform that supports the federated analysis of sensitive individual-level data. Third, we demonstrate the applicability of dsMTL to comorbidity modeling in distributed data. We show that comorbidity modeling using dsMTL outperformed conventional federated machine learning as well as the aggregation of multiple models built on the distributed datasets individually. The application of dsMTL was computationally efficient and highly scalable when applied to moderate-size (n < 500) real expression data given realistic network latency.
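
To illustrate the general idea of federated multi-task learning sketched in the abstract, the following toy example shows a proximal gradient scheme in which each cohort is one task, individual-level data stay on their own site, and only per-task gradients are exchanged; an L2,1 penalty on the weight matrix couples the tasks by selecting features jointly across cohorts. This is a minimal sketch of the technique under stated assumptions only: it is written in Python rather than R, it does not use the dsMTL or DataSHIELD API, and the names Site, prox_l21 and federated_mtl_fit are hypothetical, not functions from the paper.

# Illustrative sketch (not the dsMTL implementation): federated proximal gradient
# descent for an L2,1-regularized multi-task least-squares model, where each
# cohort (task) keeps its data locally and ships only gradient vectors.

import numpy as np


class Site:
    """Holds one cohort's data locally and returns only summary quantities."""

    def __init__(self, X, y):
        self.X, self.y = X, y  # individual-level data never leave the site

    def local_gradient(self, w):
        """Gradient of this cohort's least-squares loss w.r.t. its task weights."""
        n = self.X.shape[0]
        residual = self.X @ w - self.y
        return self.X.T @ residual / n


def prox_l21(W, threshold):
    """Row-wise soft-thresholding: proximal operator of the L2,1 norm.
    Rows shrunk to zero correspond to features discarded jointly across cohorts."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - threshold / np.maximum(norms, 1e-12))
    return W * scale


def federated_mtl_fit(sites, n_features, lam=0.1, step=0.1, n_iter=300):
    """Proximal gradient descent where only per-task gradients are aggregated."""
    W = np.zeros((n_features, len(sites)))  # column t = weights of task/cohort t
    for _ in range(n_iter):
        grad = np.column_stack(
            [s.local_gradient(W[:, t]) for t, s in enumerate(sites)]
        )
        W = prox_l21(W - step * grad, step * lam)
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p, tasks = 20, 3
    shared = rng.normal(size=p) * (rng.random(p) < 0.3)  # effects shared by cohorts
    sites = []
    for _ in range(tasks):
        X = rng.normal(size=(100, p))
        y = X @ shared + 0.1 * rng.normal(size=100)
        sites.append(Site(X, y))
    W = federated_mtl_fit(sites, p)
    print("features retained jointly across cohorts:",
          int((np.linalg.norm(W, axis=1) > 1e-8).sum()))

In this sketch only a p-dimensional gradient per cohort crosses the network at each iteration, which is the kind of summary-statistic exchange that federated platforms such as DataSHIELD are designed to mediate; the actual algorithms and safeguards of dsMTL are described in the paper itself.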
