Journal
MAGNETIC RESONANCE IMAGING
Volume 63, Pages 93-104
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.mri.2019.07.014
Keywords
MRI reconstruction; Dilated convolution; Residual learning; Multi-scale
Funding
- National Natural Science Foundation of China [61701245]
- Startup Foundation for Introducing Talent of NUIST [2243141701030]
- Priority Academic Program Development of Jiangsu Higher Education Institutions
Abstract
Magnetic resonance imaging (MRI) reconstruction is an inverse problem that conventional compressed sensing MRI (CS-MRI) algorithms address by exploiting the sparsity of MR images in an iterative, optimization-based manner. However, iterative optimization-based CS-MRI methods have two main drawbacks: they are time-consuming and limited in model capacity. Meanwhile, a main challenge for recent deep learning-based CS-MRI is the trade-off between model performance and network size. To address these issues, we develop a new multi-scale dilated network for fast, high-quality MRI reconstruction. Compared with standard convolutional kernels of the same receptive field, dilated convolutions use smaller kernels, reducing the number of network parameters while expanding the receptive field to capture almost the same information. To preserve the richness of features, we introduce global and local residual learning to extract more image edges and details, and then use concatenation layers to fuse multi-scale features with the residual branches for better reconstruction. Compared with several non-deep and deep learning CS-MRI algorithms, the proposed method yields higher reconstruction accuracy and noticeable visual improvements. In addition, we evaluate the model under a noisy setting to verify its stability, and extend it to an MRI super-resolution task.
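The parameter-saving argument for dilated convolutions can be sketched with simple receptive-field arithmetic. The sketch below is illustrative only (the function names and the specific dilation schedule are our own, not taken from the paper): a k×k kernel with dilation d covers an effective area of d·(k−1)+1 per side, so a dilated 3×3 kernel with dilation 2 matches the 5×5 receptive field of a standard 5×5 kernel using 9 weights instead of 25.

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective (per-side) extent of a k x k kernel with dilation d."""
    return d * (k - 1) + 1

def receptive_field(layers) -> int:
    """Receptive field of a stack of stride-1 conv layers, each given as (k, d)."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel_size(k, d) - 1
    return rf

# A dilated 3x3 kernel (dilation 2) sees a 5x5 area with only 9 weights,
# matching the receptive field of a 25-weight standard 5x5 kernel.
print(effective_kernel_size(3, 2))  # -> 5
print(effective_kernel_size(5, 1))  # -> 5

# Stacking 3x3 kernels with dilations 1, 2, 3 (a hypothetical multi-scale
# schedule) yields a 13x13 receptive field from only 27 weights per channel.
print(receptive_field([(3, 1), (3, 2), (3, 3)]))  # -> 13
```

This is why, at equal receptive field, a dilated network can be substantially smaller than one built from large dense kernels; the multi-scale fusion described in the abstract then concatenates features computed at these different dilation rates.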