Journal
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY
Volume 32, Issue 1, Pages 12-25
Publisher
WILEY
DOI: 10.1002/ima.22672
Keywords
COVID-19; deep learning; pneumonia; segmentation; X-ray CT
Funding
- Swiss National Science Foundation [SNSF 320030_176052]
- WOA Institution: Université de Genève
- Blended DEAL: CSAL
This study proposes an automated deep learning method for whole-lung and COVID-19 pneumonia lesion detection and segmentation. Evaluated on an external dataset, lung and lesion segmentation accuracy was high, as assessed by a range of quantitative metrics.
We present a deep learning (DL)-based automated whole-lung and COVID-19 pneumonia infectious lesion (COLI-Net) detection and segmentation method for chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped and resized, and the intensity values were clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction (RT-PCR)-positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for the lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the largest mean relative errors observed for the range first-order feature of the lung (-6.95%) and the least-axis-length shape feature of the lesions (8.68%). We developed an automated DL-guided three-dimensional whole-lung and infected-region segmentation framework for COVID-19 patients to provide fast, consistent, robust, and human-error-immune lung and pneumonia lesion detection and quantification.
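The abstract evaluates segmentation quality with the Dice coefficient, the relative volume difference, and the relative mean Hounsfield unit (HU) difference between predicted and reference masks. A minimal NumPy sketch of these three metrics is shown below; it assumes binary masks and a CT volume of matching shape, and the function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denominator if denominator else 1.0

def relative_volume_difference(pred, truth):
    """(V_pred - V_truth) / V_truth, as a percentage."""
    v_pred, v_truth = pred.astype(bool).sum(), truth.astype(bool).sum()
    return 100.0 * (v_pred - v_truth) / v_truth

def relative_hu_difference(ct, pred, truth):
    """Relative difference of the mean HU inside the predicted mask
    versus the reference mask, as a percentage."""
    mean_pred = ct[pred.astype(bool)].mean()
    mean_truth = ct[truth.astype(bool)].mean()
    return 100.0 * (mean_pred - mean_truth) / mean_truth
```

For example, if a reference lesion mask covers 4 voxels and the prediction covers 2 of them (and nothing else), the Dice coefficient is 2·2/(2+4) ≈ 0.667 and the relative volume difference is -50%.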