Proceedings Paper

FCN-Transformer Feature Fusion for Polyp Segmentation

Journal

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-12053-4_65

Keywords

Polyp segmentation; Medical image processing; Deep learning

Funding

  1. Science and Technology Facilities Council [ST/S005404/1]


The article motivates the role of colonoscopy in the early detection of colorectal cancer and the need for deep learning to automate polyp segmentation. The authors propose a new architecture that leverages a transformer for extracting the features most important for segmentation, while compensating for the transformer's limitations in full-size prediction with a fully convolutional branch. They demonstrate the state-of-the-art performance and generalisation ability of their method.
Colonoscopy is widely recognised as the gold standard procedure for the early detection of colorectal cancer (CRC). Segmentation is valuable for two significant clinical applications, namely lesion detection and classification, providing means to improve accuracy and robustness. The manual segmentation of polyps in colonoscopy images is time-consuming. As a result, the use of deep learning (DL) for automation of polyp segmentation has become important. However, DL-based solutions can be vulnerable to overfitting and the resulting inability to generalise to images captured by different colonoscopes. Recent transformer-based architectures for semantic segmentation both achieve higher performance and generalise better than alternatives; however, they typically predict a segmentation map of h/4 x w/4 spatial dimensions for an h x w input image. To this end, we propose a new architecture for full-size segmentation which leverages the strengths of a transformer in extracting the most important features for segmentation in a primary branch, while compensating for its limitations in full-size prediction with a secondary fully convolutional branch. The resulting features from both branches are then fused for final prediction of an h x w segmentation map. We demonstrate our method's state-of-the-art performance with respect to the mDice, mIoU, mPrecision, and mRecall metrics, on both the Kvasir-SEG and CVC-ClinicDB dataset benchmarks. Additionally, we train the model on each of these datasets and evaluate on the other to demonstrate its superior generalisation performance. Code available: https://github.com/CVML-UCLan/FCBFormer.
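To make the two-branch design concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a transformer-style primary branch producing coarse h/4 x w/4 features, a fully convolutional secondary branch operating at full h x w resolution, and a fusion head that concatenates the upsampled primary-branch features with the full-size convolutional features to predict an h x w segmentation map. All module names, channel widths, and the stub encoders are illustrative assumptions for this sketch, not the authors' FCBFormer implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerBranchStub(nn.Module):
    """Stand-in for the transformer primary branch: produces coarse h/4 x w/4 features.
    A real implementation would use a pretrained transformer encoder; a strided
    convolution stack is used here only to keep the sketch self-contained."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, out_channels, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.encode(x)  # (B, C, h/4, w/4)


class FullyConvolutionalBranch(nn.Module):
    """Secondary branch that keeps full h x w spatial resolution."""

    def __init__(self, out_channels: int = 32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.encode(x)  # (B, C, h, w)


class TwoBranchFusionSegmenter(nn.Module):
    """Fuses upsampled primary-branch features with full-size convolutional
    features and predicts a full-size (h x w) binary segmentation map."""

    def __init__(self):
        super().__init__()
        self.transformer_branch = TransformerBranchStub(out_channels=64)
        self.fcn_branch = FullyConvolutionalBranch(out_channels=32)
        self.head = nn.Sequential(
            nn.Conv2d(64 + 32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # 1 channel: polyp vs. background
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        t_feat = self.transformer_branch(x)  # (B, 64, h/4, w/4)
        t_feat = F.interpolate(t_feat, size=(h, w), mode="bilinear", align_corners=False)
        c_feat = self.fcn_branch(x)          # (B, 32, h, w)
        fused = torch.cat([t_feat, c_feat], dim=1)  # channel-wise fusion
        return self.head(fused)              # (B, 1, h, w) logits


if __name__ == "__main__":
    model = TwoBranchFusionSegmenter()
    logits = model(torch.randn(1, 3, 352, 352))
    print(logits.shape)  # torch.Size([1, 1, 352, 352])
```

The point mirrored here is the division of labour described in the abstract: the transformer branch is responsible for extracting strong semantic features at reduced resolution, while the convolutional branch supplies the full-resolution detail needed for the final h x w prediction.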

Authors

