Article

Swin transformer for fast MRI

Journal

NEUROCOMPUTING
Volume 493, Pages 281-304

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.04.051

Keywords

MRI reconstruction; Transformer; Compressed sensing; Parallel imaging

Funding

  1. UK Research and Innovation Future Leaders Fellowship [MR/V023799/1]
  2. Medical Research Council [MC/PC/21013]
  3. European Research Council Innovative Medicines Initiative [DRAGON, H2020-JTI-IMI2 101005122]
  4. AI for Health Imaging Award [H2020-SC1-FA-DTS-2019-1 952172]
  5. British Heart Foundation [TG/18/5/34111, PG/16/78/32402]
  6. NVIDIA Academic Hardware Grant Program
  7. Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083]
  8. Basque Government through the ELKARTEK funding program [KK-2020/00049]


Magnetic resonance imaging (MRI) is an important non-invasive clinical tool. This study introduces SwinMR, a Swin transformer based method for fast MRI reconstruction that accelerates the scanning process through k-space undersampling and recovers high-quality images with deep learning based reconstruction.
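As a minimal illustration of the acceleration mechanism mentioned above (not code from the paper), the NumPy sketch below applies a random Cartesian column mask to the k-space of a fully sampled image and returns the zero-filled image that a reconstruction network would take as input. The function name `undersample` and the mask parameters are hypothetical.

```python
# Illustrative sketch: retrospective Cartesian k-space undersampling
# producing the zero-filled input for a learned reconstruction network.
import numpy as np

def undersample(image, acceleration=4, center_fraction=0.08, seed=0):
    """Mask random k-space columns (keeping a low-frequency band) and
    return the zero-filled image, undersampled k-space and the mask."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

    # Fully sample the central low-frequency columns, randomly keep the rest.
    mask = np.zeros(w, dtype=bool)
    n_center = int(round(center_fraction * w))
    start = (w - n_center) // 2
    mask[start:start + n_center] = True
    prob = (w / acceleration - n_center) / (w - n_center)
    mask |= rng.random(w) < max(prob, 0.0)

    kspace_us = kspace * mask[None, :]
    zero_filled = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return np.abs(zero_filled), kspace_us, mask

# Example: 4x acceleration on a synthetic 256x256 phantom.
img = np.zeros((256, 256)); img[96:160, 96:160] = 1.0
zf, k_us, mask = undersample(img, acceleration=4)
print(zf.shape, int(mask.sum()), "of", mask.size, "columns kept")
```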
Magnetic resonance imaging (MRI) is an important non-invasive clinical tool that can produce high-resolution and reproducible images. However, high-quality MR images require long scanning times, which cause exhaustion and discomfort for patients and induce additional artefacts from voluntary movements and involuntary physiological motion. To accelerate the scanning process, methods based on k-space undersampling and deep learning based reconstruction have been popularised.

This work introduced SwinMR, a novel Swin transformer based method for fast MRI reconstruction. The whole network consisted of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM were 2D convolutional layers, and the FEM was composed of a cascade of residual Swin transformer blocks (RSTBs) and 2D convolutional layers. Each RSTB consisted of a series of Swin transformer layers (STLs). Unlike the multi-head self-attention (MSA) of the original transformer, which is computed over the whole image space, the (shifted) window multi-head self-attention (W-MSA/SW-MSA) of the STL was performed within shifted local windows. A novel multi-channel loss based on the coil sensitivity maps was proposed and shown to preserve more textures and details.

We performed a series of comparative and ablation studies on the Calgary-Campinas public brain MR dataset and conducted a downstream segmentation experiment on the Multi-modal Brain Tumour Segmentation Challenge 2017 dataset. The results demonstrate that SwinMR achieved high-quality reconstruction compared with other benchmark methods and showed strong robustness across different undersampling masks, under noise interference and on different datasets. The code is publicly available at https://github.com/ayanglab/SwinMR.

(c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
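As a rough orientation to the architecture described in the abstract, the PyTorch sketch below mirrors the IM -> FEM -> OM layout with cascaded residual blocks. It is not the authors' implementation (see https://github.com/ayanglab/SwinMR); for brevity it replaces the (shifted) window attention of a real Swin transformer layer with a plain global-attention transformer encoder layer, and all class and parameter names (`SwinMRSkeleton`, `RSTB`, `dim`, `num_rstb`) are illustrative assumptions.

```python
# Hypothetical sketch of the module layout from the abstract, not the
# authors' code. The STL is approximated by a standard transformer encoder
# layer; the real STL uses (shifted) window attention (W-MSA/SW-MSA).
import torch
import torch.nn as nn

class RSTB(nn.Module):
    """Residual Swin transformer block (simplified): a stack of STLs
    followed by a conv, wrapped in a residual connection."""
    def __init__(self, dim=96, depth=4, heads=6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim,
                                           batch_first=True)
        self.stls = nn.TransformerEncoder(layer, num_layers=depth)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        tokens = self.stls(tokens)
        y = tokens.transpose(1, 2).reshape(b, c, h, w)
        return x + self.conv(y)                # residual connection

class SwinMRSkeleton(nn.Module):
    """IM (conv) -> FEM (cascaded RSTBs + conv) -> OM (conv)."""
    def __init__(self, in_ch=1, dim=96, num_rstb=6):
        super().__init__()
        self.im = nn.Conv2d(in_ch, dim, 3, padding=1)
        self.fem = nn.Sequential(*[RSTB(dim) for _ in range(num_rstb)],
                                 nn.Conv2d(dim, dim, 3, padding=1))
        self.om = nn.Conv2d(dim, in_ch, 3, padding=1)

    def forward(self, zero_filled):
        feat = self.im(zero_filled)
        feat = feat + self.fem(feat)           # global residual over the FEM
        return self.om(feat)

# Example: a small configuration on zero-filled 1-channel 32x32 inputs.
net = SwinMRSkeleton(dim=48, num_rstb=2)
out = net(torch.randn(2, 1, 32, 32))
print(out.shape)  # torch.Size([2, 1, 32, 32])
```

The multi-channel loss is described only at a high level in the abstract; the sketch below shows one plausible reading, an assumption rather than the paper's exact formulation: project the reconstructed image onto each coil with the sensitivity maps and penalise the per-coil difference against the fully sampled multi-coil reference.

```python
# Hedged sketch of a sensitivity-map-based multi-channel loss; the exact
# formulation is given in the paper and repository.
import torch

def multichannel_l1(recon, reference, sens_maps):
    """recon: (B, 1, H, W) complex image; sens_maps, reference: (B, C, H, W)
    complex coil sensitivity maps and multi-coil ground truth."""
    coil_images = sens_maps * recon            # broadcast over the coil dim
    return (coil_images - reference).abs().mean()

# Example with random complex tensors (4 coils).
b, c, h, w = 2, 4, 64, 64
recon = torch.randn(b, 1, h, w, dtype=torch.complex64)
ref = torch.randn(b, c, h, w, dtype=torch.complex64)
smaps = torch.randn(b, c, h, w, dtype=torch.complex64)
print(multichannel_l1(recon, ref, smaps).item())
```

Penalising the error per coil, rather than only on a coil-combined magnitude image, keeps coil-specific information in the training signal, which is consistent with the abstract's claim that the loss preserves more textures and details.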


Reviews

Primary Rating

4.6
Not enough ratings
