Article

Improved 3D tumour definition and quantification of uptake in simulated lung tumours using deep learning

Journal

PHYSICS IN MEDICINE AND BIOLOGY
Volume 67, Issue 9

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6560/ac65d6

Keywords

PET; CNN; quantification

Funding

  1. European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant [764458]
  2. Wellcome/EPSRC Centre for Medical Engineering [WT203148/Z/16/Z]
  3. National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust
  4. King's College London
  5. Cancer Research UK National Cancer Imaging Translational Accelerator Award [C4278/A27066]

Abstract

This study presents a deep learning approach to improving the quantification of lung tumour radiotracer uptake and the definition of tumour shape in PET imaging. A network trained on simulated tumour data yields improved estimates of the ground-truth activity distribution in reconstructed PET images.
Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake.

Approach. In this work, we present a deep learning approach with the aim of improving quantification of lung tumour radiotracer uptake and tumour shape definition. A set of simulated tumours, assigned 'ground truth' radiotracer distributions, is used to generate realistic PET raw data, which are then reconstructed into PET images. The ground truth images are generated by placing simulated tumours of different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to produce realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network.

Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall, the network recovers better defined tumour shapes and improved estimates of tumour maximum and median activities.

Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
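As context for the abstract: the SUV is conventionally computed as the measured activity concentration divided by the injected activity per unit body weight (SUV = C_img / (injected dose / body weight)), which is why image noise and partial volume effects directly degrade it for small tumours. The sketch below is a minimal, hypothetical illustration of the kind of supervised image-to-image training the abstract describes: a 3D convolutional network fitted to map reconstructed PET volumes to the corresponding simulated ground-truth activity images. The residual architecture, MSE loss, layer sizes, and optimiser settings are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' actual architecture) of a 3D CNN that maps
# reconstructed PET volumes to simulated ground-truth activity images.
# All hyperparameters below are illustrative assumptions only.
import torch
import torch.nn as nn


class Restoration3DCNN(nn.Module):
    """Residual 3D CNN: input is a reconstructed PET volume (1 channel),
    output is an estimate of the ground-truth activity distribution."""

    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
            ]
        layers += [nn.Conv3d(channels, 1, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Learn a correction to the noisy, partial-volume-degraded input.
        return x + self.body(x)


def train_step(model, optimiser, recon_batch, ground_truth_batch):
    """One supervised update: reconstructed PET patches vs. simulated ground truth."""
    model.train()
    optimiser.zero_grad()
    prediction = model(recon_batch)
    loss = nn.functional.mse_loss(prediction, ground_truth_batch)
    loss.backward()
    optimiser.step()
    return loss.item()


if __name__ == "__main__":
    model = Restoration3DCNN()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy 3D patches (batch, channel, depth, height, width) standing in for
    # reconstructed PET data and the simulated ground-truth activity images.
    recon = torch.rand(2, 1, 32, 32, 32)
    truth = torch.rand(2, 1, 32, 32, 32)
    print(train_step(model, optimiser, recon, truth))
```

In practice such a network would be trained on patches extracted around the simulated tumours and evaluated against the ground-truth maximum and median activities, as in the abstract, but those details depend on the paper's actual pipeline.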
