Journal
BIOMEDICAL OPTICS EXPRESS
Volume 13, Issue 9, Pages 4870-4888
Publisher
Optica Publishing Group
DOI: 10.1364/BOE.468483
Keywords
-
Funding
- National Eye Institute [P30 EY001792, R01 EY023522, R01 EY030101, R01 EY029673, R01 EY030842]
- Research to Prevent Blindness
- Richard and Loan Hill Endowment
This study demonstrates the impact of multimodal fusion on deep learning artery-vein segmentation in optical coherence tomography and OCT angiography, and explores the characteristics used in this segmentation. The performance of multimodal architectures with OCT-OCTA fusion is compared to unimodal architectures with only OCT or OCTA inputs. The results show that both early and late fusion architectures perform competitively. Saliency maps are used to identify the characteristics in OCT and OCTA images for artery-vein segmentation.
This study demonstrates the effect of multimodal fusion on the performance of deep learning artery-vein (AV) segmentation in optical coherence tomography (OCT) and OCT angiography (OCTA), and explores the OCT/OCTA characteristics used in the deep learning AV segmentation. We quantitatively evaluated multimodal architectures with early and late OCT-OCTA fusion, compared to unimodal architectures with OCT-only and OCTA-only inputs. The OCTA-only, early OCT-OCTA fusion, and late OCT-OCTA fusion architectures yielded competitive performance. For the 6 mm × 6 mm and 3 mm × 3 mm datasets, the late fusion architecture achieved overall accuracies of 96.02% and 94.00%, slightly better than the OCTA-only architecture, which achieved overall accuracies of 95.76% and 93.79%. The 6 mm × 6 mm OCTA images show AV information at the pre-capillary level, while the 3 mm × 3 mm OCTA images reveal AV information at capillary-level detail. To interpret the deep learning performance, saliency maps were produced to identify the OCT/OCTA image characteristics used for AV segmentation. Comparative OCT and OCTA saliency maps support the capillary-free zone as one possible feature for AV segmentation in OCTA. The deep learning network MF-AV-Net used in this study is available on GitHub for open access.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
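The early/late fusion distinction in the abstract can be illustrated with a minimal toy sketch. This is not the paper's MF-AV-Net (a deep CNN, available on the authors' GitHub); `toy_encoder` and the array shapes below are placeholder assumptions used only to show where the two modalities are merged: at the input (early fusion) versus at the feature level after separate encoding (late fusion).

```python
import numpy as np

def toy_encoder(x):
    """Stand-in for a CNN encoder: maps an image to a 2-channel feature map.
    (Hypothetical placeholder, not the actual MF-AV-Net encoder.)"""
    return np.stack([x, x ** 2], axis=-1)

def early_fusion(oct_img, octa_img):
    """Early fusion: stack OCT and OCTA as input channels, then encode once."""
    fused_input = np.stack([oct_img, octa_img], axis=-1)  # (H, W, 2)
    return toy_encoder(fused_input.mean(axis=-1))         # (H, W, 2) features

def late_fusion(oct_img, octa_img):
    """Late fusion: encode each modality separately, then merge the features."""
    f_oct = toy_encoder(oct_img)                          # (H, W, 2)
    f_octa = toy_encoder(octa_img)                        # (H, W, 2)
    return np.concatenate([f_oct, f_octa], axis=-1)       # (H, W, 4)

# Placeholder en-face OCT and OCTA images (random data for illustration).
oct_img = np.random.rand(8, 8)
octa_img = np.random.rand(8, 8)
print(early_fusion(oct_img, octa_img).shape)  # (8, 8, 2)
print(late_fusion(oct_img, octa_img).shape)   # (8, 8, 4)
```

In a real segmentation network, a per-pixel AV classification head would follow either fused feature map; the paper's comparison is over which merge point yields better accuracy.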
Authors