Article

EfficientNetV2 Based Ensemble Model for Quality Estimation of Diabetic Retinopathy Images from DeepDRiD

Journal

DIAGNOSTICS
Volume 13, Issue 4

Publisher

MDPI
DOI: 10.3390/diagnostics13040622

Keywords

diabetic retinopathy; quality estimation; DeepDRiD; EfficientNetV2; fundus image


Diabetic retinopathy (DR) is one of the major complications of diabetes and is usually identified from retinal fundus images. Screening for DR from digital fundus images can be time-consuming and error-prone for ophthalmologists, and good fundus image quality is essential for efficient screening and fewer diagnostic errors. Hence, this work proposes an automated method for quality estimation (QE) of digital fundus images using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD), where it achieved a test accuracy of 75% for QE, outperforming existing methods on the DeepDRiD. The proposed ensemble method may therefore be a potential tool for automated QE of fundus images and a useful aid for ophthalmologists.
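As context for how such an ensemble typically combines its members (the paper's exact fusion strategy is not given here), a common approach is soft voting: average the per-class probabilities produced by each trained model and predict the highest-scoring class. A minimal, self-contained sketch in plain Python, with hypothetical probability vectors standing in for real EfficientNetV2 softmax outputs:

```python
# Illustrative soft-voting ensemble sketch. The probability vectors below
# are hypothetical stand-ins for softmax outputs of independently trained
# EfficientNetV2 variants on a single fundus image; the class labels
# (0 = gradable quality, 1 = ungradable) are likewise assumptions for
# illustration, not taken from the paper.

def ensemble_predict(prob_vectors):
    """Average per-class probabilities across models; return (argmax class, averages)."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

model_outputs = [
    [0.70, 0.30],  # e.g. an EfficientNetV2-S model (hypothetical output)
    [0.55, 0.45],  # e.g. an EfficientNetV2-M model (hypothetical output)
    [0.80, 0.20],  # e.g. an EfficientNetV2-L model (hypothetical output)
]
pred, avg_probs = ensemble_predict(model_outputs)
# pred == 0: the averaged probabilities favour the "gradable" class
```

Soft voting tends to be more robust than hard (majority) voting when individual models are well calibrated, since it preserves each model's confidence rather than only its top choice.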

