Article

An effective CNN and Transformer complementary network for medical image segmentation

Journal

PATTERN RECOGNITION
Volume 136

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.109228

Keywords

Transformer; Medical image segmentation; Feature complementary module; Cross-domain fusion; Convolutional Neural Network


This paper proposes a CNN and Transformer Complementary Network (CTC-Net) for medical image segmentation. It designs two encoders to produce complementary features in Transformer and CNN domains, and uses cross-domain fusion, feature correlation, and dual attention to enhance the representation ability of features. It also incorporates skip connections to extract spatial details, contextual semantics, and long-range information.
The Transformer network was originally proposed for natural language processing. Due to its powerful representation ability for long-range dependency, it has been extended to vision tasks in recent years. To fully utilize the advantages of Transformers and Convolutional Neural Networks (CNNs), we propose a CNN and Transformer Complementary Network (CTC-Net) for medical image segmentation. We first design two encoders, built on Swin Transformers and Residual CNNs, to produce complementary features in the Transformer and CNN domains, respectively. Then we cross-wisely concatenate these complementary features in a Cross-domain Fusion Block (CFB) for effectively blending them. In addition, we compute the correlation between features from the CNN and Transformer domains, and apply channel attention to the Transformer's self-attention features to capture dual attention information. We incorporate cross-domain fusion, feature correlation, and dual attention together in a Feature Complementary Module (FCM) to improve the representation ability of features. Finally, we design a Swin Transformer decoder to further improve the representation of long-range dependencies, and propose skip connections between the Transformer-decoded features and the complementary features to extract spatial details, contextual semantics, and long-range information. Skip connections are performed at different levels to enhance multi-scale invariance. Experimental results show that our CTC-Net significantly surpasses state-of-the-art image segmentation models based on CNNs, Transformers, and even combined Transformer-CNN models designed for medical image segmentation. It achieves superior performance on different medical applications, including multi-organ segmentation and cardiac segmentation. (c) 2022 Elsevier Ltd. All rights reserved.
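The paper provides no code, but the Feature Complementary Module it describes combines three ingredients: cross-wise concatenation of CNN and Transformer features, a cross-domain correlation, and channel attention. The following is a minimal NumPy sketch of that kind of fusion; the function names, the squeeze-and-excitation-style gating, and the cosine-like correlation map are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, w_down, w_up):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map.

    Illustrative stand-in for the paper's dual-attention branch.
    """
    squeezed = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w_down @ squeezed, 0.0)    # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w_up @ hidden)))  # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]              # re-weight each channel

def feature_complementary_module(cnn_feat, trans_feat, w_down, w_up):
    """Sketch of an FCM-like fusion: concat, channel-attend, correlate."""
    # Cross-domain fusion: concatenate the two domains along channels
    fused = np.concatenate([cnn_feat, trans_feat], axis=0)   # (2C, H, W)
    attended = channel_attention(fused, w_down, w_up)
    # Per-position correlation between the domains, used as a spatial gate
    corr = (cnn_feat * trans_feat).sum(axis=0, keepdims=True)  # (1, H, W)
    corr = 1.0 / (1.0 + np.exp(-corr))
    return attended * corr

# Toy shapes: C channels per domain over a small spatial grid
C, H, W = 8, 4, 4
cnn_feat = rng.standard_normal((C, H, W))
trans_feat = rng.standard_normal((C, H, W))
w_down = rng.standard_normal((C // 2, 2 * C)) * 0.1  # bottleneck weights (assumed)
w_up = rng.standard_normal((2 * C, C // 2)) * 0.1
out = feature_complementary_module(cnn_feat, trans_feat, w_down, w_up)
print(out.shape)  # (16, 4, 4)
```

The fused map keeps both domains' channels (2C in, 2C out), so a decoder skip connection can still see spatial detail from the CNN branch alongside long-range context from the Transformer branch.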

