Article

Detecting model misconducts in decentralized healthcare federated learning

Journal

International Journal of Medical Informatics

Publisher

ELSEVIER IRELAND LTD
DOI: 10.1016/j.ijmedinf.2021.104658

Keywords

Model Misconducts; Federated Learning; Predictive Modeling; Electronic Health Record; Blockchain Distributed Ledger Technology

Funding

  1. U.S. National Institutes of Health [R00HG009680, R01HL136835, R01GM118609, R01HG011066, U24LM013755]
  2. Graduate Division San Diego Matching Fellowship
  3. San Diego Biomedical Informatics Education & Research (SABER) NIH National Library of Medicine (NLM) [T15LM011271]

Abstract

This study aims to propose an algorithm-agnostic approach to detect model misconduct in cross-institutional collaborations and apply it to federated machine learning on genomic/healthcare data. The results show that the proposed method has a high recall rate with low computational cost, effectively identifying misconduct.
Background: To accelerate healthcare/genomic medicine research and facilitate quality improvement, researchers have started cross-institutional collaborations that apply artificial intelligence to clinical/genomic data. However, there are real-world risks of incorrect models being submitted to the learning process, due to either unforeseen accidents or malicious intent. This may reduce the incentive for institutions to participate in a federated modeling consortium. Existing methods to deal with this model misconduct issue mainly focus on modifying the learning methods and are therefore tied to specific algorithms.

Basic Procedures: In this paper, we aim to solve the problem in an algorithm-agnostic way by (1) designing a simulator to generate various types of model misconduct, (2) developing a framework to detect model misconducts, and (3) providing a generalizable approach to identify model misconducts in federated learning. We considered three categories of misconduct: Plagiarism, Fabrication, and Falsification, and developed a detection framework with three components, Auditing, Coefficient, and Performance detectors, tuned with a greedy parameter search.

Main Findings: We generated 10 types of misconducts from models learned on three datasets to evaluate our detection method. Our experiments showed high recall with low added computational cost. The proposed method is best at identifying misconduct by specific sites across all learning iterations, whereas it is more challenging to pinpoint misconduct at a specific site and a specific iteration.

Principal Conclusions: We anticipate that our study can help enhance the integrity and reliability of federated machine learning on genomic/healthcare data.
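
As a rough illustration of the algorithm-agnostic idea (not the paper's actual Auditing, Coefficient, or Performance detectors), a coefficient-based check might flag sites whose submitted model coefficients deviate strongly from the cross-site consensus in a given iteration. The sketch below is a minimal Python example under that assumption; the function name, the robust z-score rule, and the threshold are illustrative choices, not taken from the paper.

    # Illustrative sketch only: a simple coefficient-based outlier check for
    # federated model updates. Names, the robust z-score rule, and the 3.0
    # threshold are assumptions for illustration, not the paper's detectors.
    import numpy as np

    def flag_suspicious_sites(coefs_by_site, z_threshold=3.0):
        """coefs_by_site: dict mapping site id -> 1-D array of model coefficients
        submitted in one learning iteration. Returns the set of site ids whose
        coefficients deviate strongly from the cross-site median update."""
        sites = list(coefs_by_site)
        coefs = np.stack([coefs_by_site[s] for s in sites])   # (n_sites, n_features)
        median = np.median(coefs, axis=0)                     # consensus update
        dists = np.linalg.norm(coefs - median, axis=1)        # per-site deviation
        # Robust z-score of each site's deviation (median absolute deviation)
        mad = np.median(np.abs(dists - np.median(dists))) or 1e-12
        z = 0.6745 * (dists - np.median(dists)) / mad
        return {s for s, score in zip(sites, z) if score > z_threshold}

    # Example: site "C" submits a fabricated update far from the others
    updates = {
        "A": np.array([0.90, -1.10, 0.50]),
        "B": np.array([1.00, -1.00, 0.40]),
        "C": np.array([9.00,  8.00, -7.00]),   # fabricated/falsified update
        "D": np.array([0.95, -1.05, 0.45]),
    }
    print(flag_suspicious_sites(updates))      # likely {'C'}

Such an outlier rule only captures one slice of the problem; the paper's framework additionally audits the learning process and checks reported performance, with parameters tuned greedily, which this sketch does not attempt to reproduce.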
