Article

UACENet: Uncertain area attention and cross-image context extraction network for polyp segmentation

Publisher

WILEY
DOI: 10.1002/ima.22906

Keywords

attention mechanism; context feature learning; deep learning; polyp segmentation


Abstract

Accurately segmenting polyps from colonoscopy images is essential for early screening and diagnosis of colorectal cancer. In recent years, many advanced methods built on the encoder-decoder architecture have been applied to this task and have achieved significant improvements. However, accurate polyp segmentation remains challenging due to the irregular shape and size of polyps, the low contrast between polyp and background in some images, and environmental influences such as illumination and mucus. To tackle these challenges, we propose a novel uncertain area attention and cross-image context extraction network for accurate polyp segmentation, which consists of the uncertain area attention module (UAAM), the cross-image context extraction module (CCEM), and the adaptive fusion module (AFM). UAAM is guided by the output prediction of the adjacent decoding layer and focuses on the difficult boundary region without neglecting the background and foreground, so that more edge details and uncertain information can be captured. CCEM captures multi-scale global context within an image as well as implicit contextual information across multiple images, fusing them to enhance the extraction of global location information. AFM fuses the local detail information extracted by UAAM and the global location information extracted by CCEM with the decoding-layer features through multiple fusion and adaptive attention steps to enhance feature representation. Extensive experiments on four public datasets show that our method generally achieves state-of-the-art performance compared with other advanced methods.
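The core idea behind prediction-guided uncertain area attention can be sketched as follows. This is an illustrative assumption of one common formulation, not the paper's exact module: pixels where the adjacent decoder's sigmoid prediction is near 0.5 (ambiguous boundary) receive higher attention, while a residual term preserves the confident foreground and background signal. The function names and the `alpha` scaling parameter are hypothetical.

```python
import numpy as np

def uncertainty_map(pred):
    """Pixel-wise uncertainty from a sigmoid prediction in [0, 1].

    Peaks at 1.0 where pred == 0.5 (ambiguous boundary region) and
    falls to 0.0 where pred is confidently 0 or 1.
    """
    return 1.0 - np.abs(2.0 * pred - 1.0)

def uncertain_area_attention(feat, pred, alpha=1.0):
    """Reweight a feature map by boundary uncertainty.

    feat : (C, H, W) feature tensor from the encoder.
    pred : (H, W) prediction from the adjacent decoding layer.
    alpha: hypothetical scale for how strongly uncertain pixels are
           emphasized; the additive 1.0 keeps foreground/background
           features instead of suppressing them entirely.
    """
    u = uncertainty_map(pred)           # (H, W) uncertainty weights
    return feat * (1.0 + alpha * u)     # broadcast over channels

# Toy example: one confident pixel (p=0.05) vs. one boundary pixel (p=0.5)
pred = np.array([[0.05, 0.5]])
feat = np.ones((1, 1, 2))
out = uncertain_area_attention(feat, pred)
```

In this toy example the boundary pixel's feature is doubled (weight 1 + 1.0) while the confident pixel is barely amplified (weight 1 + 0.1), which matches the abstract's description of focusing on the difficult boundary region without discarding foreground and background cues.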
