4.3 Article

Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis

Journal

NEUROINFORMATICS
Volume 16, Issue 3-4, Pages 295-308

Publisher

HUMANA PRESS INC
DOI: 10.1007/s12021-018-9370-4

Keywords

Alzheimer's disease diagnosis; Multi-modality brain images; Convolutional neural networks (CNNs); Cascaded CNNs; Image classification

Funding

  1. National Natural Science Foundation of China (NSFC) [61375112, 61773263, U1504606]
  2. National Key Research and Development Program of China [2016YFC0100903]
  3. SMC Excellent Young Faculty program of SJTU
  4. Alzheimer's Disease Neuroimaging Initiative (ADNI) (NIH) [U01 AG024904]
  5. National Institute on Aging
  6. National Institute of Biomedical Imaging and Bioengineering
  7. Abbott
  8. AstraZeneca AB
  9. Bayer Schering Pharma AG
  10. Bristol-Myers Squibb
  11. Eisai Global Clinical Development
  12. Elan Corporation
  13. Genentech
  14. GE Healthcare
  15. GlaxoSmithKline
  16. Innogenetics
  17. Johnson and Johnson
  18. Eli Lilly and Co.
  19. Medpace, Inc.
  20. Merck and Co., Inc.
  21. Novartis AG
  22. Pfizer Inc.
  23. F. Hoffman-La Roche
  24. Schering-Plough
  25. Synarc, Inc.
  26. Alzheimer's Association
  27. Alzheimer's Drug Discovery Foundation
  28. U.S. Food and Drug Administration

Abstract

Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide powerful imaging modalities for understanding the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and to generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method automatically learns generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in preprocessing the brain images. The method is evaluated on the baseline MRI and PET images of 397 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including 93 AD patients, 204 mild cognitive impairment (MCI) subjects (76 pMCI + 128 sMCI), and 100 normal controls (NC). Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
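
The abstract describes the cascaded architecture only at a high level. Below is a minimal PyTorch sketch of that design, assuming one 3D-CNN per patch and per modality, a small 2D convolution that fuses the paired MRI/PET features of each patch, and a final fully connected classifier. The class names, patch count, channel widths, and feature dimension (Patch3DCNN, CascadedMultiModalCNN, n_patches, feat_dim) are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn


class Patch3DCNN(nn.Module):
    """Deep 3D-CNN mapping one local brain-image patch to a compact feature vector.
    Layer sizes are illustrative assumptions, not the authors' configuration."""

    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # pooling gives some tolerance to small shifts/scale
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width) patch
        return self.fc(self.features(x).flatten(1))   # (batch, out_dim)


class CascadedMultiModalCNN(nn.Module):
    """Patch-wise 3D-CNNs for MRI and PET, a small 2D-CNN that fuses the two
    modalities for each patch, and a fully connected classifier over all patches."""

    def __init__(self, n_patches: int = 27, feat_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.mri_cnns = nn.ModuleList(Patch3DCNN(feat_dim) for _ in range(n_patches))
        self.pet_cnns = nn.ModuleList(Patch3DCNN(feat_dim) for _ in range(n_patches))
        # 2D-CNN over the (modality x feature) map of each patch: learns
        # latent correlations between the MRI and PET features.
        self.fusion = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(2, 3), padding=(0, 1)), nn.ReLU(),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(n_patches * 8 * feat_dim, n_classes)

    def forward(self, mri_patches, pet_patches):
        # mri_patches / pet_patches: lists of (batch, 1, D, H, W) tensors, one per patch
        fused = []
        for cnn_m, cnn_p, m, p in zip(self.mri_cnns, self.pet_cnns,
                                      mri_patches, pet_patches):
            fm, fp = cnn_m(m), cnn_p(p)                        # (batch, feat_dim) each
            pair = torch.stack([fm, fp], dim=1).unsqueeze(1)   # (batch, 1, 2, feat_dim)
            fused.append(self.fusion(pair))                    # (batch, 8 * feat_dim)
        # Final fully connected layer; softmax is applied implicitly by
        # nn.CrossEntropyLoss during training.
        return self.classifier(torch.cat(fused, dim=1))        # (batch, n_classes) logits
```

The per-patch fusion stage mirrors the cascade described in the abstract: stacking the two modality feature vectors into a 2 x feat_dim map lets a 2D convolution learn cross-modal correlations before the patch features are concatenated for the final fully connected classifier.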

