Proceedings Paper

Deep Multiview Learning to Identify Population Structure with Multimodal Imaging

Publisher

IEEE
DOI: 10.1109/BIBE50027.2020.00057

Keywords

Deep learning; multiview learning; deep generalized canonical correlation analysis; multimodal imaging; image-driven population structure

Funding

  1. National Institutes of Health [R01 EB022574, R01 LM013463, RF1 AG063481]
  2. National Science Foundation [IIS 1837964]
  3. Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health) [U01 AG024904]
  4. DOD ADNI (Department of Defense) [W81XWH-12-2-0012]

Abstract

We present an effective deep multiview learning framework for identifying population structure from multimodal imaging data. Our approach is based on canonical correlation analysis (CCA). We propose to use deep generalized CCA (DGCCA) to learn a shared, reduced-dimensional latent representation of non-linearly mapped and maximally correlated components from multiple imaging modalities. In our empirical study, this representation captures more variance in the original data than conventional generalized CCA (GCCA), which applies only linear transformations to the multi-view data. Furthermore, subsequent cluster analysis on the feature set learned by DGCCA identifies a promising population structure in an Alzheimer's disease (AD) cohort. Genetic association analyses of the clustering results demonstrate that the shared representation learned by DGCCA yields a population structure with a stronger genetic basis than several competing feature learning methods.
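For readers who want a concrete picture of the pipeline summarized above, the sketch below illustrates a DGCCA-style setup: one nonlinear encoder per modality, trained with the classical GCCA objective applied to the encoder outputs, followed by k-means clustering on the shared representation. This is a minimal illustration, not the authors' implementation; the random synthetic views, encoder widths, latent dimension of 10, regularization value, and cluster count of 3 are all assumptions made for the example, and it relies on PyTorch and scikit-learn.

    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    torch.manual_seed(0)
    n_subjects, latent_dim = 200, 10
    view_dims = [120, 90, 60]  # hypothetical feature counts for three imaging modalities
    views = [torch.randn(n_subjects, d) for d in view_dims]  # stand-in for real imaging data

    # One small nonlinear encoder per modality (architecture is illustrative only).
    encoders = nn.ModuleList([
        nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        for d in view_dims
    ])
    optimizer = torch.optim.Adam(encoders.parameters(), lr=1e-3)

    def summed_projections(outputs, reg=1e-4):
        # Sum of per-view projection matrices Y (Y^T Y + reg*I)^{-1} Y^T,
        # the central quantity in (D)GCCA.
        n = outputs[0].shape[0]
        m = torch.zeros(n, n)
        for y in outputs:
            y = y - y.mean(dim=0)                        # center each view
            c = y.t() @ y + reg * torch.eye(y.shape[1])  # regularized covariance
            m = m + y @ torch.linalg.solve(c, y.t())     # projection onto view subspace
        return m

    def dgcca_loss(outputs):
        # Negative sum of the top-k eigenvalues of M: minimizing this trains the
        # encoders to produce maximally correlated, linearly alignable outputs.
        eigvals = torch.linalg.eigvalsh(summed_projections(outputs))
        return -eigvals[-latent_dim:].sum()

    for step in range(200):
        optimizer.zero_grad()
        loss = dgcca_loss([enc(x) for enc, x in zip(encoders, views)])
        loss.backward()
        optimizer.step()

    # Shared representation G = top-k eigenvectors of M, then k-means on G to
    # look for population structure (the cluster count of 3 is arbitrary here).
    with torch.no_grad():
        m = summed_projections([enc(x) for enc, x in zip(encoders, views)])
        _, eigvecs = torch.linalg.eigh(m)
        shared = eigvecs[:, -latent_dim:]
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shared.numpy())
    print(labels[:20])

On real data, the shared representation would replace the synthetic views' eigenvectors here, and the cluster assignments could then be carried into downstream genetic association analyses as in the paper.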
