Article

Prior wavelet knowledge for multi-modal medical image segmentation using a lightweight neural network with attention guided features

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 209

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.118166

Keywords

Medical image; Deep learning; Residual shuffle attention; Ultrasound image segmentation

Funding

  1. NIDDK of the National Institutes of Health, USA [R01DK119860]

Summary

Medical image segmentation is crucial for diagnosing and staging diseases. We propose MISegNet, a robust, lightweight real-time deep learning segmentation network that outperforms state-of-the-art methods on multiple datasets, demonstrating its versatility and effectiveness.

Abstract

Medical image segmentation plays a crucial role in diagnosing and staging diseases. It facilitates image analysis and quantification in many applications, but building appropriate solutions depends heavily on the characteristics of each dataset and on the available computational resources. Most existing approaches segment a specific anatomical region of interest and are of limited use across multiple imaging modalities in a clinical setting because of poor generalizability and high computational requirements. To mitigate these issues, we propose MISegNet, a robust, lightweight real-time deep learning segmentation network for multi-modality medical images. We incorporate a discrete wavelet transform (DWT) of the input to extract salient features in the frequency domain, which enlarges the receptive field of the neurons within the network. We propose a self-attention-based global context-aware (SGCA) module with varying dilation rates to enlarge the field of view and weight the importance of each scale, enhancing the network's ability to discriminate features. We build a residual shuffle attention (RSA) mechanism to improve the feature representation of the proposed model and formulate a new boundary-aware loss function, the Farid End Point Error (FEPE), based on edge detection, which correctly segments regions with ambiguous boundaries. We confirm the versatility of the proposed model through experiments against eleven state-of-the-art segmentation methods on four datasets of different organs, including two publicly available datasets (ISBI2017 and COVID-19 CT) and two private datasets (ovary and liver ultrasound images). Experimental results show that MISegNet, with 1.5M parameters, outperforms state-of-the-art methods by 1.5%-7% in Dice coefficient score while requiring roughly 23x fewer parameters and multiply-accumulate operations than U-Net.
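
The abstract only sketches the architectural ideas, so below is a minimal, illustrative PyTorch sketch rather than the authors' MISegNet code (which is not reproduced here): a single-level 2D Haar DWT that exposes frequency-domain detail of the input, followed by a small multi-scale dilated-convolution context block in the spirit of the SGCA module. All class and variable names (HaarDWT, DilatedContext, scale_logits) are hypothetical, and the sub-band signs/normalization follow one common Haar convention.

    # Illustrative sketch only -- not the authors' MISegNet implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class HaarDWT(nn.Module):
        """Single-level 2D Haar DWT; stacks LL, LH, HL, HH sub-bands on the channel axis."""

        def forward(self, x):
            # Polyphase (even/odd) split of each 2x2 block: a b / c d.
            a = x[:, :, 0::2, 0::2]
            b = x[:, :, 0::2, 1::2]
            c = x[:, :, 1::2, 0::2]
            d = x[:, :, 1::2, 1::2]
            ll = (a + b + c + d) / 2   # low-pass in both directions
            lh = (a - b + c - d) / 2   # high-pass along width
            hl = (a + b - c - d) / 2   # high-pass along height
            hh = (a - b - c + d) / 2   # high-pass in both directions
            return torch.cat([ll, lh, hl, hh], dim=1)  # channels x4, spatial size /2


    class DilatedContext(nn.Module):
        """Parallel dilated 3x3 convolutions fused by learned per-scale softmax weights."""

        def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
            )
            self.scale_logits = nn.Parameter(torch.zeros(len(rates)))

        def forward(self, x):
            w = torch.softmax(self.scale_logits, dim=0)
            return sum(wi * F.relu(branch(x)) for wi, branch in zip(w, self.branches))


    if __name__ == "__main__":
        img = torch.randn(1, 1, 128, 128)      # e.g. a grayscale ultrasound slice
        feats = HaarDWT()(img)                 # -> (1, 4, 64, 64) sub-band stack
        ctx = DilatedContext(in_ch=4, out_ch=16)(feats)
        print(feats.shape, ctx.shape)          # (1, 4, 64, 64) and (1, 16, 64, 64)

Learning a softmax weight per dilation branch is one simple way to "weight the importance of each scale" as the abstract describes; the actual SGCA module additionally relies on self-attention, and the RSA block and FEPE loss are likewise omitted here for brevity.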
