Article

3D Segmentation Guided Style-Based Generative Adversarial Networks for PET Synthesis

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 41, Issue 8, Pages 2092-2104

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMI.2022.3156614

Keywords

Image segmentation; Task analysis; Positron emission tomography; Modulation; Three-dimensional displays; Image synthesis; Generators; PET; GAN; style modulation; task-driven; segmentation

Funding

  1. National Science and Technology Major Project of the Ministry of Science and Technology in China [2017YFC0110903]
  2. National Natural Science Foundation of China [62022010, 81771910]
  3. SinoUnion Healthcare Inc., under the eHealth Program
  4. Fundamental Research Funds for the Central Universities of China from the State Key Laboratory of Software Development Environment in Beihang University in China
  5. 111 Project in China [B13003]
  6. High Performance Computing (HPC) Resources at Beihang University

Abstract

Potential radiation hazards in full-dose positron emission tomography (PET) imaging remain a concern, yet the quality of low-dose images falls short of what clinical use requires. It is therefore of great interest to translate low-dose PET images into full-dose equivalents. Previous deep learning approaches usually extract hierarchical features directly for reconstruction. We observe that the importance of each feature differs, so features should be weighted differently in order for the network to capture subtle information. Furthermore, accurate synthesis of certain regions of interest is critical in some applications. Here we propose a novel segmentation guided style-based generative adversarial network (SGSGAN) for PET synthesis. (1) We put forward a style-based generator employing style modulation, which explicitly controls the hierarchical features in the translation process, to generate images with more realistic textures. (2) We adopt a task-driven strategy that couples a segmentation task with a generative adversarial network (GAN) framework to improve the translation performance. Extensive experiments show the superiority of our overall framework in PET synthesis, especially on regions of interest.
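The "style modulation" the abstract refers to can be illustrated with a StyleGAN2-style weight modulation/demodulation step, in which per-channel style factors predicted from a latent code rescale the convolution weights. The following is a minimal pure-Python sketch for illustration only, not the authors' actual implementation; the function name, the list-based weight layout, and the demodulation detail are assumptions:

```python
import math

def modulate_demodulate(weights, style, eps=1e-8):
    """Hypothetical minimal sketch of style-based weight modulation.

    weights: list of output-channel filters; each filter is a list of
             per-input-channel coefficient lists, i.e. weights[o][i][k].
    style:   per-input-channel scale factors predicted from a latent code.
    Returns the modulated, then demodulated (variance-normalized) weights.
    """
    out = []
    for filt in weights:
        # Modulation: scale every coefficient of input channel i by style[i].
        mod = [[w * style[i] for w in chan] for i, chan in enumerate(filt)]
        # Demodulation: divide by the filter's L2 norm so the expected
        # output variance stays close to 1 regardless of the style scale.
        norm = math.sqrt(sum(w * w for chan in mod for w in chan) + eps)
        out.append([[w / norm for w in chan] for chan in mod])
    return out
```

In this formulation the style vector steers which hierarchical features are amplified or suppressed, while demodulation keeps activations well-scaled without an explicit normalization layer.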

