Journal
IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 41, Issue 8, Pages 2092-2104
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMI.2022.3156614
Keywords
Image segmentation; Task analysis; Positron emission tomography; Modulation; Three-dimensional displays; Image synthesis; Generators; PET; GAN; style modulation; task-driven; segmentation
Funding
- National Science and Technology Major Project of the Ministry of Science and Technology in China [2017YFC0110903]
- National Natural Science Foundation of China [62022010, 81771910]
- SinoUnion Healthcare Inc., under the eHealth Program
- Fundamental Research Funds for the Central Universities of China from the State Key Laboratory of Software Development Environment in Beihang University in China
- 111 Project in China [B13003]
- High Performance Computing (HPC) Resources at Beihang University
This study presents a novel segmentation-guided style-based generative adversarial network (SGSGAN) for PET synthesis. It uses a style-based generator with style modulation to control hierarchical features during the translation process, yielding images with more realistic textures, and adopts a task-driven strategy that couples a segmentation task with the generative adversarial network (GAN) framework to improve translation performance. Extensive experiments demonstrate the superiority of the overall framework in PET synthesis, especially on regions of interest.
Potential radiation hazards in full-dose positron emission tomography (PET) imaging remain a concern, yet the quality of low-dose images is rarely sufficient for clinical use, so translating low-dose PET images into full-dose equivalents is of great interest. Previous studies based on deep learning methods usually extract hierarchical features directly for reconstruction. We observe that these features differ in importance and should therefore be weighted differently, so that subtle information can be captured by the neural network. Moreover, in some applications the synthesis quality on particular regions of interest is critical. Here we propose a novel segmentation-guided style-based generative adversarial network (SGSGAN) for PET synthesis. (1) We put forward a style-based generator employing style modulation, which explicitly controls the hierarchical features in the translation process, to generate images with more realistic textures. (2) We adopt a task-driven strategy that couples a segmentation task with a generative adversarial network (GAN) framework to improve the translation performance. Extensive experiments show the superiority of our overall framework in PET synthesis, especially on regions of interest.
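The abstract does not give the exact form of the style modulation or of the task-driven objective. As a hedged illustration only, the sketch below shows two commonly used building blocks consistent with the description: StyleGAN2-style weight modulation/demodulation (scaling convolution kernels per input channel by a learned style vector, then renormalizing each output filter) and a combined objective that adds a weighted segmentation loss to the adversarial loss. The function names, shapes, and the weighting parameter `lam` are assumptions, not the paper's actual implementation.

```python
import numpy as np

def style_modulate(weight, style, eps=1e-8):
    """Modulate a conv kernel by a per-input-channel style, then demodulate.

    weight: conv kernel of shape (out_ch, in_ch, k, k)   -- assumed layout
    style:  per-input-channel scales of shape (in_ch,)
    Returns a kernel whose each output filter has (approximately) unit L2 norm,
    as in StyleGAN2-style modulation; this is an illustrative stand-in for the
    paper's style modulation, not its exact formulation.
    """
    # Modulate: scale each input channel of the kernel by its style coefficient.
    w = weight * style[None, :, None, None]
    # Demodulate: renormalize each output filter to unit norm for stability.
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)
    return w * demod[:, None, None, None]

def task_driven_loss(adv_loss, seg_loss, lam=1.0):
    """Task-driven objective sketch: adversarial term plus a weighted
    segmentation term (the weight `lam` is a hypothetical hyperparameter)."""
    return adv_loss + lam * seg_loss
```

After demodulation each output filter has unit norm regardless of the style scales, which keeps activation magnitudes stable while still letting the style vector reweight how strongly each input feature channel contributes, matching the idea of weighting hierarchical features differently.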