3.8 Proceedings Paper

FCN-Transformer Feature Fusion for Polyp Segmentation

Proceedings

Medical Image Understanding and Analysis (MIUA 2022)
Volume 13413, Pages 892-907

Publisher

Springer International Publishing AG
DOI: 10.1007/978-3-031-12053-4_65

Keywords

Polyp segmentation; Medical image processing; Deep learning

Funding

  1. Science and Technology Facilities Council [ST/S005404/1]

The article motivates automated polyp segmentation by the importance of colonoscopy in the early detection of colorectal cancer and the limitations of manual segmentation. The authors propose a new architecture that leverages a transformer to extract the features most important for segmentation in a primary branch, while a secondary fully convolutional branch compensates for the transformer's limitations in full-size prediction. They demonstrate the state-of-the-art performance and generalisation ability of the method.
Colonoscopy is widely recognised as the gold standard procedure for the early detection of colorectal cancer (CRC). Segmentation is valuable for two significant clinical applications, namely lesion detection and classification, providing a means to improve accuracy and robustness. The manual segmentation of polyps in colonoscopy images is time-consuming. As a result, the use of deep learning (DL) for automation of polyp segmentation has become important. However, DL-based solutions can be vulnerable to overfitting and the resulting inability to generalise to images captured by different colonoscopes. Recent transformer-based architectures for semantic segmentation both achieve higher performance and generalise better than alternatives; however, they typically predict a segmentation map of h/4 x w/4 spatial dimensions for an h x w input image. To address this, we propose a new architecture for full-size segmentation which leverages the strengths of a transformer in extracting the most important features for segmentation in a primary branch, while compensating for its limitations in full-size prediction with a secondary fully convolutional branch. The resulting features from both branches are then fused for final prediction of an h x w segmentation map. We demonstrate our method's state-of-the-art performance with respect to the mDice, mIoU, mPrecision, and mRecall metrics on both the Kvasir-SEG and CVC-ClinicDB dataset benchmarks. Additionally, we train the model on each of these datasets and evaluate on the other to demonstrate its superior generalisation performance. Code is available at: https://github.com/CVML-UCLan/FCBFormer.
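The fusion principle described in the abstract (upsampling coarse transformer features and combining them with full-resolution features from a fully convolutional branch) can be illustrated with a short PyTorch sketch. The code below is a simplified, hypothetical illustration of that idea, not the authors' FCBFormer implementation, which is available in the linked repository; the TransformerBranch, ConvBranch, and TwoBranchFusion modules and their hyperparameters are placeholders chosen for brevity.

# Minimal sketch of the two-branch fusion idea described in the abstract.
# This is NOT the authors' FCBFormer implementation (see the linked repository);
# the branch definitions below are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerBranch(nn.Module):
    """Placeholder primary branch: produces coarse (h/4 x w/4) features."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        # Patch embedding with stride 4 gives the h/4 x w/4 token grid.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

    def forward(self, x):
        f = self.patch_embed(x)                      # (B, dim, h/4, w/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)        # (B, h*w/16, dim)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class ConvBranch(nn.Module):
    """Placeholder secondary branch: keeps full (h x w) resolution."""
    def __init__(self, in_ch=3, dim=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)                          # (B, dim, h, w)


class TwoBranchFusion(nn.Module):
    """Fuse upsampled transformer features with full-size convolutional features."""
    def __init__(self):
        super().__init__()
        self.transformer = TransformerBranch()
        self.fcn = ConvBranch()
        self.head = nn.Conv2d(64 + 32, 1, kernel_size=1)  # binary polyp mask logits

    def forward(self, x):
        coarse = self.transformer(x)                                  # h/4 x w/4
        coarse = F.interpolate(coarse, size=x.shape[-2:],
                               mode="bilinear", align_corners=False)  # back to h x w
        fine = self.fcn(x)                                            # h x w
        return self.head(torch.cat([coarse, fine], dim=1))            # (B, 1, h, w)


if __name__ == "__main__":
    model = TwoBranchFusion()
    out = model(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 1, 64, 64])

The key step mirrors the abstract: the primary branch produces coarse features, the secondary branch preserves full resolution, and the two are fused before the final h x w prediction.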
