Article

MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning

Journal

PHYSICS IN MEDICINE AND BIOLOGY
Volume 64, Issue 2

Publisher

IOP PUBLISHING LTD
DOI: 10.1088/1361-6560/aaf5e0

Keywords

MRI; PET/MRI; attenuation correction; machine learning

Funding

  1. National Cancer Institute of the National Institutes of Health [R01CA215718]
  2. Department of Defense (DoD) Prostate Cancer Research Program (PCRP) Award [W81XWH-13-1-0269]
  3. Emory Winship Pilot Grant


Deriving accurate attenuation maps for PET/MRI remains a challenging problem because MRI voxel intensities are not related to photon attenuation properties, and bone and air produce similarly low signal. This work presents a learning-based method to derive patient-specific computed tomography (CT) maps from routine T1-weighted MRI in their native space for attenuation correction of brain PET. We developed a machine-learning-based method using a sequence of alternating random forests under the framework of an iterative refinement model. Anatomical feature selection is included in both the training and prediction stages to achieve optimal performance. To evaluate its accuracy, we retrospectively investigated 17 patients, each of whom had undergone both brain PET/CT and MR imaging. The PET images were corrected for attenuation using the CT images as ground truth, as well as using pseudo CT (PCT) images generated from the MR images. The PCT images showed a mean absolute error of 66.1 +/- 8.5 HU, an average correlation coefficient of 0.974 +/- 0.018, and average Dice similarity coefficients (DSC) larger than 0.85 for air, bone and soft tissue. Side-by-side image comparisons and joint histograms demonstrated very good agreement between PET images corrected with PCT and with CT: the mean differences of voxel values in selected VOIs were less than 4%, the mean absolute difference over all active areas was around 2.5%, and the mean linear correlation coefficient between CT-corrected and PCT-corrected PET images was 0.989 +/- 0.017. This work demonstrates a novel learning-based approach that automatically generates CT images from routine T1-weighted MR images using random forest regression with patch-based anatomical signatures to effectively capture the relationship between the CT and MR images. PET images reconstructed with the PCT exhibit errors well below the accepted test/retest reliability of PET/CT, indicating high quantitative equivalence.
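To make the described pipeline more concrete, the sketch below illustrates one plausible reading of "a sequence of alternating random forests with patch-based anatomical signatures": each voxel's MR neighborhood is flattened into a feature vector, a forest regresses CT intensity from it, and later stages append the previous CT estimate as an extra feature for iterative refinement. This is a minimal illustration under stated assumptions, not the authors' implementation; all function names, patch sizes, and forest parameters are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

PATCH = 3  # hypothetical half-width of the cubic patch used as the anatomical signature


def extract_patches(volume, coords, half=PATCH):
    """Flatten the (2*half+1)^3 MR neighborhood around each voxel into a feature row."""
    padded = np.pad(volume, half, mode="edge")
    feats = []
    for z, y, x in coords:
        block = padded[z:z + 2 * half + 1,
                       y:y + 2 * half + 1,
                       x:x + 2 * half + 1]
        feats.append(block.ravel())
    return np.asarray(feats)


def train_alternating_forests(mr_vols, ct_vols, coords_per_vol, n_stages=3):
    """Train a sequence of forests; stage k > 0 also sees the stage k-1 CT prediction."""
    mr_feats = np.vstack([extract_patches(mr, c)
                          for mr, c in zip(mr_vols, coords_per_vol)])
    targets = np.concatenate([ct[tuple(np.array(c).T)]
                              for ct, c in zip(ct_vols, coords_per_vol)])
    forests = []
    feats = mr_feats
    for _ in range(n_stages):
        rf = RandomForestRegressor(n_estimators=50, max_depth=20, n_jobs=-1)
        rf.fit(feats, targets)
        pred = rf.predict(feats)
        # iterative refinement: augment the MR signature with the current CT estimate
        feats = np.hstack([mr_feats, pred[:, None]])
        forests.append(rf)
    return forests
```

Prediction would proceed analogously: extract the same patch signatures from a new T1-weighted MR volume, run the first forest, then feed each intermediate CT estimate back in for the remaining stages before assembling the voxel-wise predictions into a pseudo CT volume.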
