Article

Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network

Journal

PHYSICS IN MEDICINE AND BIOLOGY
Volume 64, Issue 1, Pages -

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6560/aaf44b

Keywords

feature fusion; co-segmentation; multi-modality imaging; deep learning

Funding

  1. National Natural Science Foundation of China [61375018, 61672253]
  2. NIH/NCI [R01 CA172638]
  3. NIH/NCI Cancer Center Support Grant [P30 CA008748]

Automatic tumor segmentation from medical images is an important step for computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, leading to state-of-the-art performance. However, most existing deep learning segmentation methods only work for a single imaging modality. PET/CT scanners are now widely used in the clinic and provide both metabolic and anatomical information by integrating PET and CT in a single device. In this study, we proposed a novel multi-modality segmentation method based on a 3D fully convolutional neural network (FCN), which is capable of taking both PET and CT information into account simultaneously for tumor segmentation. The network started with a multi-task training module, in which two parallel sub-segmentation architectures constructed using deep convolutional neural networks (CNNs) were designed to automatically extract feature maps from PET and CT respectively. A feature fusion module was subsequently designed based on cascaded convolutional blocks, which re-extracted features from the PET/CT feature maps using a weighted cross entropy minimization strategy. The tumor mask was obtained as the output at the end of the network using a softmax function. The effectiveness of the proposed method was validated on a clinical PET/CT dataset of 84 patients with lung cancer. The results demonstrated that the proposed network was effective, fast and robust, and achieved significant performance gains over CNN-based methods and traditional methods using PET or CT only, two V-net based co-segmentation methods, two variational co-segmentation methods based on fuzzy set theory, and a deep learning co-segmentation method using W-net.
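
The sketch below illustrates the two-branch idea described in the abstract: parallel 3D convolutional branches extract modality-specific features from PET and CT, a fusion module of cascaded convolutional blocks re-extracts features from the concatenated maps, and the output layer produces a tumor mask trained with a weighted cross entropy loss. This is a minimal PyTorch illustration only; the module names, channel sizes, depths and class weights are assumptions, not the authors' exact architecture.

# Minimal sketch (assumed PyTorch) of a two-branch PET/CT co-segmentation network.
# All hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D conv layers with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class MultiModalityFCN(nn.Module):
    # Two parallel sub-segmentation branches (PET, CT) followed by a fusion module.
    def __init__(self, num_classes=2, base_ch=16):
        super().__init__()
        self.pet_branch = conv_block(1, base_ch)   # extracts PET feature maps
        self.ct_branch = conv_block(1, base_ch)    # extracts CT feature maps
        # Cascaded conv blocks re-extract features from the fused PET/CT maps.
        self.fusion = nn.Sequential(
            conv_block(2 * base_ch, 2 * base_ch),
            conv_block(2 * base_ch, base_ch),
        )
        self.classifier = nn.Conv3d(base_ch, num_classes, kernel_size=1)

    def forward(self, pet, ct):
        f_pet = self.pet_branch(pet)
        f_ct = self.ct_branch(ct)
        fused = self.fusion(torch.cat([f_pet, f_ct], dim=1))
        return self.classifier(fused)              # logits; softmax/argmax gives the mask

if __name__ == "__main__":
    net = MultiModalityFCN()
    pet = torch.randn(1, 1, 32, 64, 64)            # (batch, channel, depth, height, width)
    ct = torch.randn(1, 1, 32, 64, 64)
    logits = net(pet, ct)
    labels = torch.randint(0, 2, (1, 32, 64, 64))  # dummy tumor mask
    class_weights = torch.tensor([0.2, 0.8])       # assumed weighting toward the tumor class
    loss = nn.CrossEntropyLoss(weight=class_weights)(logits, labels)
    print(logits.shape, loss.item())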
