Article

FAT-Net: Feature adaptive transformers for automated skin lesion segmentation

Journal

MEDICAL IMAGE ANALYSIS
Volume 76, Article 102327

Publisher

ELSEVIER
DOI: 10.1016/j.media.2021.102327

Keywords

Feature adaptive transformer; Convolutional neural networks; Skin lesion segmentation; Memory-efficient decoder

Funding

  1. National Natural Science Foundation of China [61973221, 61871274, 61801305, 61872351, 81571758]
  2. Natural Science Foundation of Guangdong Province, China [2018A030313381, 2019A1515011165]
  3. COVID-19 Prevention Project of Guangdong Province, China [2020KZDZX1174, 2018AAA0102900]
  4. Shenzhen Key Basic Research Project [JCYJ20180507184647636, JCYJ20170413161913429, JCYJ20190808155618806]


The study introduces a novel skin lesion segmentation method named FAT-Net, which integrates a transformer branch and a feature adaptation module to capture long-range dependencies and enhance feature fusion. Experimental results demonstrate the superior accuracy and inference speed of FAT-Net on four public datasets compared with state-of-the-art methods.
Skin lesion segmentation from dermoscopic images is essential for improving the quantitative analysis of melanoma. However, it is still a challenging task due to the large scale variations and irregular shapes of skin lesions. In addition, the blurred boundaries between skin lesions and the surrounding tissue may also increase the probability of incorrect segmentation. Due to the inherent limitations of convolutional neural networks (CNNs) in capturing global context information, traditional CNN-based methods usually cannot achieve satisfactory segmentation performance. In this paper, we propose a novel feature adaptive transformer network based on the classical encoder-decoder architecture, named FAT-Net, which integrates an extra transformer branch to effectively capture long-range dependencies and global context information. Furthermore, we also employ a memory-efficient decoder and a feature adaptation module to enhance the fusion between adjacent-level features by activating the effective channels and suppressing irrelevant background noise. We have performed extensive experiments to verify the effectiveness of the proposed method on four public skin lesion segmentation datasets: ISIC 2016, ISIC 2017, ISIC 2018, and PH2. Ablation studies demonstrate the effectiveness of our feature adaptive transformer and memory-efficient strategies. Comparisons with state-of-the-art methods also verify the superiority of the proposed FAT-Net in terms of both accuracy and inference speed. The code is available at https://github.com/SZUcsh/FAT-Net. (c) 2021 Elsevier B.V. All rights reserved.
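The abstract describes a dual-branch encoder (a CNN branch plus a transformer branch) whose outputs are merged by a feature adaptation module before decoding. The following minimal PyTorch sketch is written only from that description: the layer sizes, the single TransformerEncoderLayer used as a stand-in for the transformer branch, and the squeeze-and-excitation style channel gating inside FeatureAdaptation are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

# Minimal sketch of a dual-branch encoder with a feature adaptation module,
# loosely following the FAT-Net abstract. All sizes and sub-modules are
# illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn


class FeatureAdaptation(nn.Module):
    """Fuse CNN and transformer features, then re-weight channels
    (squeeze-and-excitation style) to damp background noise."""

    def __init__(self, cnn_ch, trans_ch, out_ch, reduction=4):
        super().__init__()
        self.proj = nn.Conv2d(cnn_ch + trans_ch, out_ch, kernel_size=1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global context per channel
            nn.Conv2d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),                                 # excitation: per-channel weights in (0, 1)
        )

    def forward(self, f_cnn, f_trans):
        # Resize transformer features to the CNN spatial resolution, then fuse.
        f_trans = nn.functional.interpolate(
            f_trans, size=f_cnn.shape[-2:], mode="bilinear", align_corners=False
        )
        fused = self.proj(torch.cat([f_cnn, f_trans], dim=1))
        return fused * self.gate(fused)                   # activate useful channels, suppress the rest


class DualBranchEncoder(nn.Module):
    """CNN branch for local detail plus a (stand-in) transformer branch for
    global context; the two are merged by the feature adaptation module."""

    def __init__(self, in_ch=3, cnn_ch=64, embed_dim=64, patch=16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, cnn_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cnn_ch, cnn_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Patch embedding + one transformer encoder layer as a stand-in for the
        # full transformer branch described in the abstract.
        self.patch_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        self.transformer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )
        self.fuse = FeatureAdaptation(cnn_ch, embed_dim, out_ch=cnn_ch)

    def forward(self, x):
        f_cnn = self.cnn(x)                               # (B, cnn_ch, H/4, W/4)
        tokens = self.patch_embed(x)                      # (B, embed_dim, H/16, W/16)
        b, c, h, w = tokens.shape
        tokens = self.transformer(tokens.flatten(2).transpose(1, 2))
        f_trans = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(f_cnn, f_trans)


if __name__ == "__main__":
    model = DualBranchEncoder()
    out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)                                      # torch.Size([1, 64, 56, 56])

The channel gate is the part that corresponds to the abstract's "activating the effective channels and suppressing irrelevant background noise": each fused channel is scaled by a learned weight in (0, 1), so channels carrying lesion-relevant features are kept while noisy ones are attenuated.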
