Article

Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis

Journal

IEEE Transactions on Medical Imaging
Volume 41, Issue 1, Pages 145-157

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TMI.2021.3107013

Keywords

Generators; Generative adversarial networks; Three-dimensional displays; Positron emission tomography; Training; Image synthesis; Computed tomography; Medical image synthesis; generative adversarial network; bidirectional mapping mechanism

Funding

  1. National Natural Science Foundation of China [62172403, 61872351]
  2. International Science and Technology Cooperation Projects of Guangdong [2019A050510030]
  3. Distinguished Young Scholars Fund of Guangdong [2021B1515020019]
  4. Excellent Young Scholars of Shenzhen [RCYX20200714114641211]
  5. Shenzhen Key Basic Research Project [JCYJ20200109115641762]

Abstract

Fusing multi-modality medical images, such as magnetic resonance (MR) imaging and positron emission tomography (PET), can provide complementary anatomical and functional information about the human body. However, PET data are not always available, owing to the high cost, radiation exposure, and other practical limitations of PET imaging. This paper proposes a 3D end-to-end synthesis network, the Bidirectional Mapping Generative Adversarial Network (BMGAN), in which image contexts and latent vectors are used jointly for brain MR-to-PET synthesis. Specifically, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into a high-dimensional latent space. A 3D Dense-UNet generator architecture and hybrid loss functions are further constructed to improve the visual quality of the cross-modality synthetic images. Notably, the proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms competing methods in quantitative measures, qualitative comparisons, and classification-based evaluation.
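
As a rough illustration of the bidirectional mapping and hybrid-loss ideas summarized in the abstract, the PyTorch sketch below pairs an MR-to-PET generator with a PET-to-latent encoder and combines adversarial, voxel-wise reconstruction, and latent-consistency terms. All module definitions (SimpleGenerator, LatentEncoder, PatchDiscriminator), network sizes, the WGAN-style adversarial term, and the loss weights are illustrative assumptions, not the paper's actual 3D Dense-UNet architecture or exact loss formulation.

```python
# Hedged sketch of a bidirectional-mapping GAN objective for MR-to-PET
# synthesis. Architectures and weights are toy-sized assumptions.
import torch
import torch.nn as nn

class SimpleGenerator(nn.Module):
    """Maps an MR volume to a synthetic PET volume (stand-in for the 3D Dense-UNet)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, mr):
        return self.net(mr)

class LatentEncoder(nn.Module):
    """Embeds a PET volume into a latent vector: the 'backward' half of the
    bidirectional mapping, pushing PET semantics into the latent space."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, latent_dim)
    def forward(self, pet):
        return self.fc(self.conv(pet).flatten(1))

class PatchDiscriminator(nn.Module):
    """Scores the realism of (MR, PET) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, mr, pet):
        return self.net(torch.cat([mr, pet], dim=1))

def generator_loss(G, E, D, mr, pet, lam_rec=10.0, lam_lat=1.0):
    """Hybrid generator objective: adversarial + L1 reconstruction +
    latent-consistency terms (weights lam_rec / lam_lat are assumed)."""
    fake_pet = G(mr)
    adv = -D(mr, fake_pet).mean()                        # fool the discriminator
    rec = (fake_pet - pet).abs().mean()                  # voxel-wise L1 term
    lat = (E(fake_pet) - E(pet).detach()).pow(2).mean()  # match latent embeddings
    return adv + lam_rec * rec + lam_lat * lat

if __name__ == "__main__":
    G, E, D = SimpleGenerator(), LatentEncoder(), PatchDiscriminator()
    mr = torch.randn(1, 1, 16, 16, 16)   # toy paired MR volume
    pet = torch.randn(1, 1, 16, 16, 16)  # toy paired PET volume
    print(generator_loss(G, E, D, mr, pet).item())
```

The latent-consistency term is what makes the mapping "bidirectional" in spirit here: the generator is penalized not only at the voxel level but also when its synthetic PET drifts from the real PET's embedding, which is one plausible way to preserve subject-specific structure.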
