4.6 Article

Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation

Journal

IEEE Journal of Biomedical and Health Informatics
Volume 25, Issue 9, Pages 3507-3516

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/JBHI.2021.3059453

Keywords

Tumors; Computed tomography; Image segmentation; Positron emission tomography; Feature extraction; Imaging; Sensitivity; Convolutional Neural Network (CNN); Multimodal Image Segmentation; Positron Emission Tomography; Computed Tomography (PET-CT)

Funding

  1. Australian Research Council (ARC) [DP170104304, IC170100022]

Abstract

Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high tumor detection sensitivity of PET with the anatomical information of CT. Tumor segmentation is a critical element of PET-CT analysis but, at present, the performance of existing automated methods for this challenging task is low. Segmentation therefore tends to be done manually by imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information extracted separately from the PET and CT modalities, on the assumption that each modality contains complementary information. However, these methods do not fully exploit the high tumor sensitivity of PET to guide the segmentation. We introduce a deep learning-based framework for multimodal PET-CT segmentation with a multimodal spatial attention module (MSAM). The MSAM automatically learns to emphasize regions (spatial areas) related to tumors and to suppress normal regions with physiologically high uptake in the PET input. The resulting spatial attention maps are then used to target a convolutional neural network (CNN) backbone toward areas of the CT image with a higher likelihood of tumor. Our experimental results on two clinical PET-CT datasets, of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS), validate the effectiveness of the framework on these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
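
To make the mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of a PET-driven spatial attention module: a small CNN over the PET input produces a sigmoid attention map, which multiplicatively gates CT feature maps inside a segmentation backbone such as a U-Net. The class and variable names (MSAMSketch, pet_branch, ct_features) and the layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a PET-driven spatial attention module (illustrative only;
# not the authors' implementation).
import torch
import torch.nn as nn

class MSAMSketch(nn.Module):
    """Learns a spatial attention map from the PET image and uses it to
    gate CT feature maps from a segmentation backbone (e.g., a U-Net encoder)."""

    def __init__(self, ct_channels: int):
        super().__init__()
        # Small CNN over the single-channel PET input -> one spatial attention map.
        self.pet_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # attention values in [0, 1]
        )
        self.refine = nn.Conv2d(ct_channels, ct_channels, kernel_size=1)

    def forward(self, pet: torch.Tensor, ct_features: torch.Tensor) -> torch.Tensor:
        # pet: (B, 1, H, W) PET slice; ct_features: (B, C, H, W) CT feature maps.
        attention = self.pet_branch(pet)          # (B, 1, H, W)
        gated = ct_features * attention           # emphasize tumor-like regions, damp the rest
        return self.refine(gated + ct_features)   # residual path keeps CT anatomical context

# Usage: gate CT features with the PET-derived attention before decoding to a mask.
msam = MSAMSketch(ct_channels=64)
pet = torch.randn(2, 1, 128, 128)
ct_features = torch.randn(2, 64, 128, 128)
out = msam(pet, ct_features)  # (2, 64, 128, 128)
```

The residual addition in the sketch is a design assumption: it lets the backbone retain CT context even where the PET-derived attention is low, while the multiplicative gate steers the segmentation toward regions with high tumor likelihood.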
