Article

Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information

Journal

COMPUTERIZED MEDICAL IMAGING AND GRAPHICS
Volume 36, Issue 1, Pages 47-53

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compmedimag.2011.06.004

Keywords

Image segmentation; Medical image analysis; Dual-modality medical imaging; Functional PET imaging

Funding

  1. ARC
  2. PolyU

Abstract

Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single-modality images. In this paper, we propose a dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies on allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies: PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results on 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. (C) 2011 Elsevier Ltd. All rights reserved.
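The abstract describes an adaptive, voxel-by-voxel weighting of PET and CT evidence through the MDP coefficient; the paper itself defines how that coefficient is computed and how spatial constraints enter the optimization. The sketch below is only a minimal illustration of the general idea of letting the more discriminative modality dominate at each voxel: it is not the authors' method, it omits the spatial constraints, and names such as `mdp_weight`, `pet_likelihood` and `ct_likelihood` are hypothetical placeholders. Here the "discriminatory power" of a modality is approximated by how peaked its per-voxel class likelihoods are.

```python
# Illustrative sketch only (not the authors' implementation): adaptive
# voxel-wise fusion of PET and CT class evidence. All function and
# variable names are hypothetical.
import numpy as np

def mdp_weight(pet_likelihood, ct_likelihood, eps=1e-8):
    """Toy per-voxel weight favouring the more discriminative modality.

    A modality is treated as more discriminative at a voxel when its
    class likelihoods are more peaked (lower normalized entropy).
    Inputs have shape (n_voxels, n_classes).
    """
    def peakedness(p):
        p = p / (p.sum(axis=1, keepdims=True) + eps)
        entropy = -(p * np.log(p + eps)).sum(axis=1)
        return 1.0 - entropy / np.log(p.shape[1])  # 1 = confident, 0 = uniform

    d_pet = peakedness(pet_likelihood)
    d_ct = peakedness(ct_likelihood)
    return d_pet / (d_pet + d_ct + eps)  # weight given to PET, in [0, 1]

def fuse_and_label(pet_likelihood, ct_likelihood):
    """Combine the two modalities voxel by voxel and assign a class label."""
    w = mdp_weight(pet_likelihood, ct_likelihood)[:, None]
    fused = w * pet_likelihood + (1.0 - w) * ct_likelihood
    return fused.argmax(axis=1)  # e.g. 0 = CSF, 1 = GM, 2 = WM
```

In this toy version, voxels where CT gives a confident tissue assignment but PET is ambiguous (or vice versa) are labelled mainly from the confident modality, which is the behaviour the MDP weighting is designed to achieve.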
