Article

Privacy-preserving quality control of neuroimaging datasets in federated environments

Journal

HUMAN BRAIN MAPPING
Volume 43, Issue 7, Pages 2289-2310

Publisher

WILEY
DOI: 10.1002/hbm.25788

Keywords

federated neuroimaging; fMRI; quality control; sMRI

Funding

  1. National Institutes of Health [1R01DA040487, R01DA049238, R01MH121246, 2R01EB006841, 2RF1MH121885]
  2. National Science Foundation [2112455]
  3. Center of Biomedical Research Excellence (COBRE) [P20GM103472, 5P20RR021938]
  4. National Center for Research Resources
  5. Stavros Niarchos Foundation
  6. Leon Levy Foundation
  7. NIMH [R03MH096321, K23MH087770]
  8. Function BIRN [U24-RR021992]

Abstract

Privacy concerns and other constraints may prevent pooling data for analysis at a single site. To address this, we propose two algorithms, dSNE and its differentially private counterpart DP-dSNE, together with metrics to evaluate embedding quality.
Privacy concerns for rare disease data, institutional or IRB policies, and limits on local computational resources, storage, or download capability are among the reasons that may preclude analyses that pool data at a single site. A growing number of multisite projects and consortia have been formed to conduct productive research in federated environments under constraints of this kind. In this scenario, a quality control tool that visualizes decentralized data in its entirety via global aggregation of local computations is especially important, as it would allow the screening of samples that cannot otherwise be jointly evaluated. To solve this issue, we present two algorithms: decentralized data stochastic neighbor embedding, dSNE, and its differentially private counterpart, DP-dSNE. We leverage publicly available datasets to simultaneously map data samples located at different sites according to their similarities. Even though the data never leave the individual sites, dSNE does not provide any formal privacy guarantees. To overcome this, we rely on differential privacy: a formal mathematical guarantee that protects individuals from being identified as contributors to a dataset. We implement DP-dSNE with AdaCliP, a recently proposed method that adds less noise to the gradients per iteration. We introduce metrics for measuring embedding quality and validate our algorithms against their centralized counterpart on two toy datasets. Our validation on six multisite neuroimaging datasets shows promising results for the quality control tasks of visualization and outlier detection, highlighting the potential of our private, decentralized visualization approach.
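The per-iteration privatization the abstract describes can be sketched as gradient clipping followed by Gaussian noise, the core step of differentially private gradient methods. This is an illustrative sketch only, not the authors' implementation: the function name and parameters are assumptions, and AdaCliP's per-coordinate adaptive clipping and noise scaling are omitted for brevity.

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a gradient to a fixed L2 norm, then add Gaussian noise.

    Illustrative DP-SGD-style step (hypothetical helper, not the paper's
    code). AdaCliP additionally adapts clipping/noise per coordinate,
    which is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping norm.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise calibrated to the clipping norm (sensitivity bound).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: privatize a local embedding gradient before sharing it with
# the aggregator; with noise_multiplier=0 only clipping is applied.
grad = np.array([3.0, 4.0])  # L2 norm 5, so it gets rescaled to clip_norm
private_grad = dp_noisy_gradient(grad, clip_norm=1.0, noise_multiplier=0.5)
```

In a federated setting such as the one the paper targets, each site would privatize its local gradient this way before it contributes to the global aggregation, so the shared update carries a formal privacy guarantee.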
