Article

Self-supervised multimodal reconstruction pre-training for retinal computer-aided diagnosis

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 185

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2021.115598

Keywords

Deep learning; Medical imaging; Self-supervised learning; Eye fundus; Transfer learning; Computer-aided diagnosis

Funding

  1. Instituto de Salud Carlos III, Government of Spain [DTS18/00136]
  2. European Regional Development Fund (ERDF) of the European Union (EU) [DTS18/00136]
  3. Ministerio de Ciencia e Innovacion, Government of Spain [RTI2018-095894-B-I00, PID2019-108435RB-I00]
  4. Xunta de Galicia [ED481A-2017/328]
  5. European Social Fund (ESF) of the EU [ED481A-2017/328]
  6. Conselleria de Cultura, Educacion e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva [ED431C 2020/24]
  7. Conselleria de Educacion, Universidade e Formacion Profesional [ED431G 2019/01]
  8. Xunta de Galicia, through the ERDF
  9. Secretaria Xeral de Universidades

Abstract

This study proposes a self-supervised learning method using unlabeled multimodal data to enhance the accuracy of retinal computer-aided diagnosis systems, without relying on manual annotation. Experimental results demonstrate satisfactory performance in diagnosing different ocular diseases, showcasing the potential of leveraging unlabeled multimodal visual data in the medical field.
Computer-aided diagnosis using retinal fundus images is crucial for the early detection of many ocular and systemic diseases. Nowadays, deep learning-based approaches are commonly used for this purpose. However, training deep neural networks usually requires a large amount of annotated data, which is not always available. In practice, this issue is commonly mitigated with techniques such as data augmentation or transfer learning. Nevertheless, the latter typically relies on networks that were pre-trained on additional annotated data. An emerging alternative to the traditional transfer learning source tasks is the use of self-supervised tasks, which do not require manually annotated data for training. In that regard, we propose a novel self-supervised visual learning strategy for improving retinal computer-aided diagnosis systems using unlabeled multimodal data. In particular, we explore the use of a multimodal reconstruction task between complementary retinal imaging modalities. This makes it possible to take advantage of existing unlabeled multimodal data in the medical domain, improving the diagnosis of different ocular diseases with additional domain-specific knowledge that does not rely on manual annotation. To validate and analyze the proposed approach, we performed several experiments aimed at the diagnosis of different diseases, including two of the most prevalent vision-impairing ocular disorders: glaucoma and age-related macular degeneration. Additionally, the advantages of the proposed approach are clearly demonstrated in comparisons against both common fully-supervised approaches in the literature and current self-supervised alternatives for retinal computer-aided diagnosis. In general, the results show a satisfactory performance of our proposal, which improves on existing alternatives by leveraging the unlabeled multimodal visual data that is commonly available in the medical field.
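The two-stage strategy the abstract describes — first a self-supervised reconstruction of one retinal modality from another on unlabeled paired data, then reuse of the learned representation for supervised diagnosis — can be sketched on toy data. This is a minimal illustrative sketch only: the synthetic arrays, the linear "encoder", and the logistic diagnosis head are assumptions standing in for the paper's actual deep networks and imaging data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired modalities: X ~ descriptors of one modality (e.g. retinography),
# Y ~ descriptors of a complementary modality. No diagnosis labels are used in stage 1.
n, d_in, d_out = 200, 16, 8
A_true = rng.normal(size=(d_in, d_out))          # unknown cross-modal relation
X = rng.normal(size=(n, d_in))
Y = X @ A_true + 0.1 * rng.normal(size=(n, d_out))

# Stage 1 -- self-supervised multimodal reconstruction pre-training:
# fit a (here, linear) encoder W that predicts modality Y from modality X
# by gradient descent on the mean squared reconstruction error.
W = np.zeros((d_in, d_out))
losses = []
for _ in range(300):
    pred = X @ W
    grad = X.T @ (pred - Y) / n
    W -= 0.01 * grad
    losses.append(float(np.mean((pred - Y) ** 2)))

# Stage 2 -- transfer to diagnosis: reuse the pre-trained representation
# H = XW as input to a small supervised head (logistic regression) trained
# on a labeled set. The labels here are synthetic, for illustration only.
H = X @ W
labels = (X @ A_true)[:, 0] > 0                  # hypothetical disease label
w_head, b = np.zeros(d_out), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(H @ w_head + b)))
    g = p - labels
    w_head -= 0.1 * H.T @ g / n
    b -= 0.1 * g.mean()

p = 1.0 / (1.0 + np.exp(-(H @ w_head + b)))
acc = float(np.mean((p > 0.5) == labels))
```

The point of the sketch is the division of labor: stage 1 extracts domain-specific structure from unlabeled multimodal pairs, so stage 2 needs only the (typically scarce) annotated examples to train a lightweight diagnosis head on top.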
