Journal
COMPUTERS IN BIOLOGY AND MEDICINE
Volume 146, Issue -, Pages -
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compbiomed.2022.105628
Keywords
Medical image segmentation; Attention mechanism; Encoder-decoder network
Funding
- National Natural Science Foundation of China [8210072776]
- Science and Technology Innovation Committee of Shenzhen City [JCYJ20200109140820699, 20200925174052004]
- Guangdong Basic and Applied Fundamental Research Fund Committee [2021A1515012195]
- Guangdong Provincial Department of Education [2020ZDZX3043]
- Guangdong Provincial Key Laboratory [2020B121201001]
This paper proposes a new attention module (FGAM) for medical image segmentation that is simple, pluggable, and effective. It improves segmentation results by exploiting the feature representation ability of the encoder and decoder features.
Medical image segmentation is fundamental to computer-aided diagnosis and surgery. Various attention modules have been proposed to improve segmentation results, but they have limitations for medical image segmentation, such as heavy computation and weak applicability across frameworks. To address these problems, we propose a new attention module named FGAM, short for Feature Guided Attention Module, a simple yet pluggable and effective module for medical image segmentation. FGAM mines the feature representation ability in the encoder and decoder features. Specifically, the shallow decoder layer always contains abundant information, which FGAM treats as a queryable feature dictionary. The module contains a parameter-free activator and can be removed after training of various encoder-decoder networks. The efficacy of FGAM is demonstrated on various encoder-decoder models across five datasets: four publicly available datasets and one in-house dataset.
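The idea described in the abstract can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' code: the decoder's shallow feature map is treated as a key/value "feature dictionary", encoder features act as queries, and a parameter-free dot-product attention re-weights them; the residual connection is one plausible reason the module can be removed after training without breaking the backbone.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_guided_attention(encoder_feat, decoder_feat):
    """Parameter-free attention sketch (hypothetical, not the paper's exact FGAM).

    encoder_feat, decoder_feat: arrays of shape (C, H, W).
    Encoder features query the decoder shallow feature map,
    which serves as a key/value dictionary.
    """
    c, h, w = encoder_feat.shape
    q = encoder_feat.reshape(c, h * w).T        # (N, C) queries, N = H*W
    kv = decoder_feat.reshape(c, h * w).T       # (N, C) feature dictionary
    attn = softmax(q @ kv.T / np.sqrt(c))       # (N, N) similarity weights
    guided = attn @ kv                          # (N, C) dictionary lookup
    out = q + guided                            # residual: module stays removable
    return out.T.reshape(c, h, w)

np.random.seed(0)
enc = np.random.rand(8, 4, 4)   # toy encoder feature map
dec = np.random.rand(8, 4, 4)   # toy shallow decoder feature map
out = feature_guided_attention(enc, dec)
print(out.shape)  # (8, 4, 4)
```

Since the attention is computed purely from the features themselves (no learnable weights), the module adds no parameters, which is consistent with the paper's claim of a parameter-free activator that is pluggable into different encoder-decoder backbones.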