Article

ResGANet: Residual group attention network for medical image classification and segmentation

Journal

Medical Image Analysis
Volume 76

Publisher

Elsevier
DOI: 10.1016/j.media.2021.102313

Keywords

Deep learning; Medical image analysis; Residual group attention network; Image classification; Image segmentation

Funding

  1. National Natural Science Foundation of China [62162058]
  2. Science and Technology Department of Xinjiang Uyghur Autonomous Region [2020E0234]
  3. Xinjiang Autonomous Region key research and development project [2021B03001-4]


The proposed ResGANet model outperforms state-of-the-art backbone models on medical image classification and segmentation tasks, offering a promising way to strengthen the feature representation of convolutional neural networks (CNNs).
In recent years, deep learning has shown superior performance across many areas of medical image analysis, and a number of deep architectures have been proposed for computational pathology classification, segmentation, and detection tasks. Owing to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that captures feature dependencies in medical images along two independent dimensions: channel and space. Stacking these group attention blocks in ResNet style yields a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51-3.47 times fewer parameters than the original ResNet and can be used directly for downstream medical image segmentation tasks. Extensive experiments show that the proposed ResGANet outperforms state-of-the-art backbone models on medical image classification tasks, and applying it to different segmentation networks improves the baseline models on medical image segmentation tasks without changing the network architecture. We hope this work provides a promising method for enhancing the feature representation of convolutional neural networks (CNNs) in the future. (c) 2021 Elsevier B.V. All rights reserved.
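The abstract's central idea is attention applied independently along a feature map's two dimensions: channel and space. The sketch below is not the paper's group attention block (its exact grouping and parameterization are defined in the paper); it is a hypothetical, parameter-free illustration in plain Python of channel gating (global average pooling per channel, then a sigmoid weight) followed by spatial gating (cross-channel mean per position, then a sigmoid mask).

```python
import math


def sigmoid(x):
    """Standard logistic function, used here as the gating nonlinearity."""
    return 1.0 / (1.0 + math.exp(-x))


def channel_attention(fmap):
    """Scale each channel by a sigmoid of its global average (channel gating).

    fmap is a C x H x W nested list; this mirrors squeeze-style channel
    attention, without the learned layers a real block would have.
    """
    weights = [
        sigmoid(sum(sum(row) for row in ch) / (len(ch) * len(ch[0])))
        for ch in fmap
    ]
    return [
        [[v * w for v in row] for row in ch]
        for ch, w in zip(fmap, weights)
    ]


def spatial_attention(fmap):
    """Scale each spatial position by a sigmoid of its cross-channel mean."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    mask = [
        [sigmoid(sum(fmap[c][i][j] for c in range(C)) / C) for j in range(W)]
        for i in range(H)
    ]
    return [
        [[fmap[c][i][j] * mask[i][j] for j in range(W)] for i in range(H)]
        for c in range(C)
    ]


def group_attention_block(fmap):
    """Apply channel then spatial gating; the two dimensions are treated
    independently, as in the abstract's description. The output keeps the
    input's C x H x W shape, so such blocks can be stacked ResNet-style."""
    return spatial_attention(channel_attention(fmap))
```

Because both gates are sigmoids in (0, 1), the block preserves the feature map's shape while rescaling every value, which is what lets attention blocks be dropped into a residual stack without altering the surrounding architecture.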
