4.2 Article

A CNN Transfer Learning-Based Automated Diagnosis of COVID-19 From Lung Computerized Tomography Scan Slices

Journal

NEW GENERATION COMPUTING
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.1007/s00354-023-00232-3

Keywords

COVID-19; SARS-CoV-2; Wavelet transform; CT scan; Transfer learning; MobileNetV2

This study proposes a CNN transfer learning-based approach for the automatic identification of COVID-19 infection from lung CT images. The results show that the pre-trained MobileNetV2 model achieves strong classification performance, with 93.59% accuracy, 100% sensitivity, and 87.25% specificity.
Lung abnormalities are among the most widespread illnesses, affecting individuals of all age groups, and can arise from several causes. Recently, the novel disease widely known as COVID-19, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was declared an outbreak by the World Health Organization. Detecting COVID-19 at an early stage is crucial for suppressing the epidemic it has triggered. This work proposes a CNN-based transfer learning approach for the screening of COVID-19. The central aim of this approach is to develop a computerized framework that supports medical organizations, particularly in regions with few skilled personnel. The proposed work explores the potential of pre-trained model architectures for the automatic identification of COVID-19 infection from lung CT images. First, during data preparation, a discrete wavelet transform is applied for three-level image decomposition, and wavelet-based denoising is then performed on the training samples using the VisuShrink algorithm. Second, data augmentation is carried out by applying zoom, brightness change, height-width shifting, shearing, and rotation operations. Third, a fine-tuned, modified MobileNetV2 model is trained, with 80% of the CT images used for training and 20% for validation. The overall performance of the pre-trained models is evaluated using several parametric metrics. The experimental analysis shows that the pre-trained MobileNetV2 CNN model obtains improved classification results, with 93.59% accuracy, 100% sensitivity, 87.25% specificity, 88.59% precision, 93.95% F1-score, 100% NPV, and an AUC of 93.62%. In addition, several other CNN models, namely Xception, NASNetLarge, NASNetMobile, DenseNet121, DenseNet169, DenseNet201, InceptionV3, and InceptionResNetV2, are compared in the experimental analysis.
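
As an illustration of the denoising stage described in the abstract, the following Python sketch applies three-level wavelet-based denoising with the VisuShrink (universal-threshold) method via scikit-image. The wavelet family ("db1"), soft thresholding, the noise-level estimation step, and the [0, 1] normalization are assumptions not stated in the abstract, not details taken from the paper.

```python
# Hedged sketch of the wavelet-denoising preprocessing step.
# Assumptions: scikit-image's VisuShrink implementation, "db1" wavelet,
# soft thresholding, and grayscale CT slices normalized to [0, 1].
import numpy as np
from skimage.restoration import denoise_wavelet, estimate_sigma

def denoise_ct_slice(ct_slice: np.ndarray) -> np.ndarray:
    """Denoise a single grayscale CT slice with 3-level VisuShrink."""
    ct_slice = ct_slice.astype(np.float64)
    ct_slice /= ct_slice.max() + 1e-8      # normalize to [0, 1]
    sigma_est = estimate_sigma(ct_slice)   # estimate the noise standard deviation
    return denoise_wavelet(
        ct_slice,
        sigma=sigma_est,
        wavelet="db1",          # assumed wavelet family
        mode="soft",
        method="VisuShrink",    # universal-threshold shrinkage
        wavelet_levels=3,       # three-level decomposition, as in the abstract
        rescale_sigma=True,
    )
```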
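A minimal sketch of the augmentation and MobileNetV2 transfer-learning steps, assuming a Keras/TensorFlow implementation. Only the augmentation types and the 80/20 train/validation split come from the abstract; the augmentation ranges, the 224x224 input size, the directory layout ("ct_dataset/"), the classifier head, the frozen backbone, and the optimizer settings are illustrative assumptions.

```python
# Hedged sketch: augmentation plus ImageNet-pretrained MobileNetV2 fine-tuning.
# Hyperparameters, head architecture, and dataset path are assumptions.
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    zoom_range=0.2,
    brightness_range=(0.8, 1.2),
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    rotation_range=15,
    validation_split=0.2,          # 80% training / 20% validation
)
train_gen = datagen.flow_from_directory(
    "ct_dataset/", target_size=(224, 224), class_mode="binary",
    subset="training")
val_gen = datagen.flow_from_directory(
    "ct_dataset/", target_size=(224, 224), class_mode="binary",
    subset="validation")

# Pretrained backbone with a small binary classification head on top.
base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False             # freeze backbone for initial fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # COVID vs. non-COVID
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=20)
```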
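The reported metrics follow directly from the binary confusion matrix on the validation set. The helper below is a hypothetical sketch of how they might be computed with scikit-learn; y_true and y_prob are placeholders for ground-truth labels and predicted probabilities.

```python
# Hedged sketch: deriving the reported metrics from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def covid_metrics(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                 # recall on COVID-positive cases
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    npv         = tn / (tn + fn)                 # negative predictive value
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    auc         = roc_auc_score(y_true, y_prob)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f1=f1, npv=npv, auc=auc)
```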
