Article

DRFL: Federated Learning in Diabetic Retinopathy Grading Using Fundus Images

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2023.3264473

Keywords

Feature extraction; Databases; Lesions; Biomedical imaging; Computational modeling; Diabetes; Retina; Diabetic retinopathy; deep learning; fundus image; federated learning; preprocessing


Diabetic retinopathy (DR) is a complication of diabetes mellitus in which retinal lesions develop and impair vision. Detecting DR in its early stages avoids permanent vision loss: treatments provide relief, but vision loss due to DR is irreversible. Manual grading of DR is time-consuming and prone to human error. Another real-world problem is exchanging patients' fundus-image information among hospitals worldwide while upholding the organisations' privacy obligations. When training a deep learning (DL) network, two critical factors must be kept in mind: creating a collaborative platform and protecting patient data privacy. An automated DR detection technique that protects patient data and privacy is therefore required. In this work, we propose DRFL, a novel DR severity grading technique based on Federated Learning (FL), a recent advancement in DL. FL is a research paradigm that allows DL models to be trained collectively without disclosing clinical information. In DRFL, we combine the Federated Averaging (FedAvg) technique with the median of the categorical cross-entropy loss, since the median cross-entropy is better suited than FedAvg alone to clients that are under-fitted or over-fitted. We also propose a novel central server that extracts multi-scale features from fundus images to identify the small lesions they contain. In this work, we consider five clients holding different preprocessed fundus images collected from publicly available databases (MESSIDOR-2, IDRiD, and Kaggle) and a local database collected from Silchar Medical College and Hospital. The proposed model obtained an accuracy of 98.6%, a specificity of 99.3%, a precision of 97.25%, and an F1 score of 97.5%, outperforming other techniques.
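The abstract does not give the aggregation details, but the standard FedAvg step it builds on can be sketched as follows: the server averages each client's model parameters weighted by its local dataset size. This is a minimal illustration, assuming NumPy arrays as parameters; the function name `fed_avg` and the toy clients and sizes are hypothetical, and DRFL's additional use of the median of the clients' categorical cross-entropy losses is described only in the paper itself.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average each layer's parameters
    across clients, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Hypothetical example: three clients, each holding one 2x2 weight matrix.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 100, 200]  # local dataset sizes
global_weights = fed_avg(clients, sizes)
print(global_weights[0])  # weighted mean: 1*0.25 + 2*0.25 + 3*0.5 = 2.25
```

In a full round, the server would broadcast `global_weights` back to the clients for the next epoch of local training; DRFL replaces plain loss averaging with the median of the clients' cross-entropy losses to reduce the influence of under- or over-fitted clients.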

