Article

Improved 3D tumour definition and quantification of uptake in simulated lung tumours using deep learning

Journal

PHYSICS IN MEDICINE AND BIOLOGY
Volume 67, Issue 9

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6560/ac65d6

Keywords

PET; CNN; quantification

Funding

  1. European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant [764458]
  2. Wellcome/EPSRC Centre for Medical Engineering [WT203148/Z/16/Z]
  3. National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust
  4. King's College London
  5. Cancer Research UK National Cancer Imaging Translational Accelerator Award [C4278/A27066]

This study presents a deep learning approach to improving the quantification of lung tumour radiotracer uptake and the definition of tumour shape in PET imaging. A network trained on simulated tumour data yields improved estimates of the underlying activity distribution in reconstructed PET images.

Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and by the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake.

Approach. In this work, we present a deep learning approach aimed at improving the quantification of lung tumour radiotracer uptake and the definition of tumour shape. A set of simulated tumours, assigned 'ground truth' radiotracer distributions, is used to generate realistic PET raw data, which are then reconstructed into PET images. The ground truth images are generated by placing simulated tumours of different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to produce realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network.

Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall, the network recovers better-defined tumour shapes and improved estimates of tumour maximum and median activities.

Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
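As context for the training setup described in the Approach, the sketch below shows in broad strokes how a 3D convolutional network can be trained on pairs of reconstructed PET volumes and their ground-truth activity maps. This is a minimal illustration assuming PyTorch; the architecture, patch sizes, loss function and the synthetic tensors are placeholders, not the network or data used in the paper.

    # Minimal sketch (not the authors' implementation): a small 3D CNN that maps
    # reconstructed PET volumes towards their 'ground truth' activity volumes.
    # All layer sizes, patch shapes and the synthetic data below are illustrative.
    import torch
    import torch.nn as nn

    class Restore3DCNN(nn.Module):
        def __init__(self, channels: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Predict a residual correction to the reconstructed PET volume.
            return x + self.net(x)

    # Stand-in training pairs: hypothetical reconstructed PET patches and
    # matching ground-truth activity patches (batch, channel, depth, height, width).
    recon = torch.rand(4, 1, 32, 32, 32)
    truth = torch.rand(4, 1, 32, 32, 32)

    model = Restore3DCNN()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # voxel-wise fidelity to the ground truth

    for epoch in range(5):
        optimiser.zero_grad()
        loss = loss_fn(model(recon), truth)
        loss.backward()
        optimiser.step()

In this sketch the network predicts a residual correction to the input volume and is fitted with a voxel-wise mean-squared error against the ground truth; it is intended only to make the image-to-image training idea concrete, not to reproduce the authors' architecture or training procedure.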

