Article

TBUnet: A Pure Convolutional U-Net Capable of Multifaceted Feature Extraction for Medical Image Segmentation

Journal

JOURNAL OF MEDICAL SYSTEMS
Volume 47, Issue 1

Publisher

SPRINGER
DOI: 10.1007/s10916-023-02014-2

Keywords

Medical image segmentation; U-Net; Large kernel convolution; Multifaceted feature extraction; Feature fusion


In this paper, a network model called TBUnet is proposed for medical image segmentation. TBUnet extracts high frequency, low frequency, and boundary information through three branches, and uses a fusion layer and a feature enhancement module to combine and emphasize features. Experiments demonstrate that TBUnet achieves excellent segmentation performance and generalization capability on different datasets.
Many current medical image segmentation methods rely on convolutional neural networks (CNNs), and several extended U-Net-based networks use deep feature representations to achieve satisfactory results. However, because of their limited receptive fields, convolutional architectures cannot explicitly model the varying-range dependencies present in medical images. Recent advances in large-kernel convolution make it possible to capture a wider range of low-frequency information, bringing this goal closer. In this paper, we propose TBUnet to address the difficulty of accurately segmenting lesions with heterogeneous structures and fuzzy borders, such as melanoma, colon polyps, and breast cancer. TBUnet is a pure convolutional network with three branches that extract high-frequency information, low-frequency information, and boundary information, respectively; it is thus able to extract multifaceted features. To fuse the feature maps from the three branches, TBUnet introduces the FL (fusion layer) module, which is based on thresholding and logical operations. We design the FE (feature enhancement) module on the skip connections to emphasize fine-grained features. In addition, our method varies the number of input channels assigned to the different branches at each stage of the network, so that the relationship between low- and high-frequency features can be learned. TBUnet yields 91.08 DSC on ISIC-2018 for melanoma segmentation, outperforming state-of-the-art medical image segmentation methods. Furthermore, results of 82.48 DSC on the BUSI dataset and 89.04 DSC on the Kvasir-SEG dataset show that TBUnet surpasses advanced segmentation methods. Experiments demonstrate that TBUnet has excellent segmentation performance and generalization capability.
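The abstract describes the architecture only at a high level, and this page includes no code. Purely as an illustrative sketch, the PyTorch-style snippet below shows one way a three-branch block (high-frequency, low-frequency, and boundary paths) with a threshold-and-logic fusion step could be wired up. The kernel sizes, channel widths, class names, and the exact fusion rule here are assumptions for illustration, not the authors' TBUnet implementation.

```python
# Illustrative sketch only: a possible three-branch block with threshold-based
# fusion, loosely following the abstract's description. Kernel sizes, channel
# splits, and the fusion rule are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ThreeBranchBlock(nn.Module):
    """Hypothetical block: a small-kernel (high-frequency) branch, a
    large-kernel (low-frequency) branch, and a boundary-oriented branch,
    fused by a simple threshold/logical rule."""

    def __init__(self, in_ch: int, out_ch: int, threshold: float = 0.5):
        super().__init__()
        self.threshold = threshold
        # High-frequency branch: small receptive field (assumed 3x3 conv).
        self.high = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # Low-frequency branch: large-kernel depthwise conv (assumed 7x7)
        # followed by a pointwise projection.
        self.low = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=7, padding=3, groups=in_ch),
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # Boundary branch: assumed to be another 3x3 conv path; the paper's
        # actual boundary extraction may differ.
        self.boundary = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.project = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, l, b = self.high(x), self.low(x), self.boundary(x)
        # FL-style fusion (assumed): keep high-frequency responses where the
        # low-frequency branch is confidently active (a logical-AND mask),
        # otherwise fall back to the low-frequency features.
        mask = (torch.sigmoid(l) > self.threshold).float()
        fused_hl = mask * h + (1.0 - mask) * l
        # Concatenate the fused map with the low-frequency and boundary
        # features, then project back to out_ch channels.
        return self.project(torch.cat([fused_hl, l, b], dim=1))


if __name__ == "__main__":
    block = ThreeBranchBlock(in_ch=16, out_ch=32)
    y = block(torch.randn(1, 16, 64, 64))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```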
