Article

VG-DropDNet a Robust Architecture for Blood Vessels Segmentation on Retinal Image

Journal

IEEE ACCESS
Volume 10, Pages 92067-92083

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2022.3202890

Keywords

Computer architecture; Image segmentation; Retina; Blood vessels; Sensitivity; Neurons; Medical diagnostic imaging; DenseNet; retinal image; segmentation; U-Net; VG-DropDNet

Funding

  1. Computation Laboratory, Faculty of Mathematics and Natural Sciences, Universitas Sriwijaya

Abstract

Adding layers to the U-Net architecture increases the number of parameters and the network complexity. The Visual Geometry Group (VGG) architecture with a 16-layer backbone (VGG-16) can mitigate this problem by using small convolutions. Densely connected networks (DenseNet) can avoid redundant feature learning in VGG by directly connecting each layer to the feature maps of all previous layers, and adding a dropout layer protects DenseNet from overfitting. This study proposes VG-DropDNet, an architecture that combines VGG, DenseNet, and U-Net with a dropout layer for retinal blood vessel segmentation. VG-DropDNet is applied to the Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE) datasets. On DRIVE it achieves an accuracy of 95.36%, a sensitivity of 79.74%, and a specificity of 97.61%. The F1-score of 0.8144 on DRIVE indicates that VG-DropDNet has high precision and recall, and the IoU of 68.70% shows that the segmented images closely resemble their ground truth. The results on STARE are excellent: an accuracy of 98.56%, a sensitivity of 91.24%, a specificity of 92.99%, and an IoU of 86.90%, showing that the proposed method is accurate and robust for retinal blood vessel segmentation. The Cohen's kappa coefficient of VG-DropDNet is 0.8386 on DRIVE and 0.98 on STARE, indicating that its results are consistent and precise on both datasets. Overall, the results on both datasets indicate that VG-DropDNet is effective, robust, and stable for retinal blood vessel segmentation.
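As a rough illustration of the architecture described in the abstract, the sketch below wires VGG-style stacks of small 3x3 convolutions and DenseNet-style densely connected blocks with dropout into a U-Net encoder-decoder. This is a minimal Keras sketch under assumed layer counts, filter sizes, and a 0.2 dropout rate; the names dense_block, vgg_block, build_vg_dropdnet, and growth_rate are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a VG-DropDNet-style encoder-decoder (not the authors' exact configuration).
import tensorflow as tf
from tensorflow.keras import layers, Model


def dense_block(x, growth_rate=16, n_layers=4, dropout_rate=0.2):
    """DenseNet-style block: each 3x3 conv receives the concatenation of all
    previous feature maps; a Dropout layer guards against overfitting."""
    features = [x]
    for _ in range(n_layers):
        y = layers.Concatenate()(features) if len(features) > 1 else features[0]
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        y = layers.Dropout(dropout_rate)(y)
        features.append(y)
    return layers.Concatenate()(features)


def vgg_block(x, filters, n_convs=2):
    """VGG-16-style block: stacked small 3x3 convolutions."""
    for _ in range(n_convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_vg_dropdnet(input_shape=(512, 512, 1)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: VGG-style conv stacks followed by dense blocks, saving skip connections.
    skips = []
    x = inputs
    for filters in (32, 64, 128):
        x = vgg_block(x, filters)
        x = dense_block(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck.
    x = vgg_block(x, 256)
    x = dense_block(x)

    # Decoder: U-Net-style upsampling with concatenated encoder features.
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = vgg_block(x, filters)
        x = dense_block(x)

    # Per-pixel vessel probability map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs, name="vg_dropdnet_sketch")


model = build_vg_dropdnet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

The abstract reports accuracy, sensitivity, specificity, F1-score, IoU, and Cohen's kappa. The helper below shows, under the usual pixel-wise definitions, how such metrics can be computed from a binarized prediction and its ground-truth vessel mask; vessel_metrics is a hypothetical name, not the authors' code.

```python
# Hypothetical evaluation helper for binary vessel segmentation masks.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score


def vessel_metrics(pred_mask: np.ndarray, true_mask: np.ndarray) -> dict:
    p, t = pred_mask.astype(bool).ravel(), true_mask.astype(bool).ravel()
    tp = np.sum(p & t)
    tn = np.sum(~p & ~t)
    fp = np.sum(p & ~t)
    fn = np.sum(~p & t)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on vessel pixels
        "specificity": tn / (tn + fp),
        "f1": f1_score(t, p),
        "iou": tp / (tp + fp + fn),      # Jaccard overlap with the ground truth
        "kappa": cohen_kappa_score(t, p),
    }
```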

