Article

Automatic Lung Segmentation on Chest X-rays Using Self-Attention Deep Neural Network

Journal

SENSORS
Volume 21, Issue 2, Pages: -

Publisher

MDPI
DOI: 10.3390/s21020369

Keywords

deep learning; medical image; attention module; image segmentation; lung segmentation

Funding

  1. Kyonggi University

Abstract

This study introduces a deep learning-based method for segmenting lung areas in chest X-ray images, using self-attention modules to capture the key regions of feature maps. Experiments show that adding attention modules in the lower layers of U-Net improves lung-area segmentation performance.
Accurate identification of the boundaries of organs or abnormal objects (e.g., tumors) in medical images is important for surgical planning and for the diagnosis and prognosis of diseases. In this study, we propose a deep learning-based method to segment lung areas in chest X-rays. The novel aspect of the proposed method is the self-attention module, in which the outputs of the channel and spatial attention modules are combined to generate attention maps, each highlighting the regions of the feature map that correspond to what and where to attend during learning, respectively. The attention maps are then multiplied element-wise with the input feature map, and the result is added back to the input feature map for residual learning. Using X-ray images collected from public datasets for training and evaluation, we applied the proposed attention modules to U-Net for segmentation of lung areas and conducted experiments while varying the locations of the attention modules in the baseline network. The experimental results showed that our method achieved performance comparable to or better than existing medical image segmentation networks in terms of Dice score when the proposed attention modules were placed in the lower layers of both the contracting and expanding paths of U-Net.
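To make the described mechanism concrete, the following is a minimal PyTorch sketch of such an attention block: channel and spatial attention maps are combined, multiplied element-wise with the input feature map, and the result is added back to the input for residual learning. The pooling choices, reduction ratio, kernel size, and the element-wise product used to fuse the two maps are illustrative assumptions, not the authors' exact published configuration.

# Illustrative sketch only; layer hyperparameters are assumptions,
# not the configuration published in the paper.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: learns *what* to attend to."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, 1, 1) weights, broadcast over the spatial dimensions.
        return torch.sigmoid(self.mlp(self.avg_pool(x)))

class SpatialAttention(nn.Module):
    """Spatial attention: learns *where* to attend."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool along the channel axis, then infer a (B, 1, H, W) map.
        avg_out = x.mean(dim=1, keepdim=True)
        max_out = x.max(dim=1, keepdim=True).values
        return torch.sigmoid(self.conv(torch.cat([avg_out, max_out], dim=1)))

class SelfAttentionBlock(nn.Module):
    """Fuses channel and spatial attention, applies the joint map by
    element-wise multiplication, and adds the input back (residual learning),
    as described in the abstract."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcasting fuses (B,C,1,1) and (B,1,H,W) into a (B,C,H,W) map.
        attention = self.channel_att(x) * self.spatial_att(x)
        return x + x * attention  # residual connection

if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)   # a batch of U-Net feature maps
    out = SelfAttentionBlock(64)(feats)
    print(out.shape)                       # torch.Size([2, 64, 128, 128])

A block like this would be inserted after convolutional stages of U-Net; per the abstract, the reported gains came from placing it in the lower layers of both the contracting and expanding paths.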

