Article

Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI

Journal

NEUROIMAGE
Volume 225, Issue -, Pages -

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.neuroimage.2020.117366

Keywords

Uncertainty quantification; Deep learning; Safety; Robustness; Interpretability; Super-resolution; Image enhancement; Image synthesis; Neuroimaging; Diffusion MRI; Tractography

Funding

  1. MS Society UK
  2. UCL Hospitals Biomedical Research Centre
  3. NIH [1U54MH091657]
  4. Washington University
  5. EU Horizon 2020 grant [CDS-QuaMRI 634541-2]
  6. EPSRC [R014019, R006032, N018702, M020533, R006032/1, M020533/1]
  7. European Union's Horizon 2020 research and innovation programme [634541]
  8. Microsoft scholarship
  9. EPSRC [EP/L023067/1] Funding Source: UKRI

This study introduces a method to improve the accuracy of medical image super-resolution and demonstrates, through detailed analysis and evaluation, the potential benefits of integrating uncertainty modelling into DL algorithms.
Deep learning (DL) has shown great potential in medical image enhancement problems, such as super-resolution or image synthesis. However, to date, most existing approaches are based on deterministic models, neglecting the presence of different sources of uncertainty in such problems. Here we introduce methods to characterise different components of uncertainty, and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and integrate the two to quantify predictive uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and separately quantify the effects of intrinsic and parameter uncertainty therein. The methods are evaluated for super-resolution of two different signal representations of diffusion MR images (Diffusion Tensor images and Mean Apparent Propagator MRI) and their derived quantities such as mean diffusivity and fractional anisotropy, on multiple datasets of both healthy and pathological human brains. Results highlight three key potential benefits of modelling uncertainty for improving the safety of DL-based image enhancement systems. Firstly, modelling uncertainty improves the predictive performance even when test data departs from training data (out-of-distribution datasets). Secondly, the predictive uncertainty highly correlates with reconstruction errors, and is therefore capable of detecting predictive failures. Results on both healthy subjects and patients with brain glioma or multiple sclerosis demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the super-resolved images that can be accounted for in subsequent analysis. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level explanations for the model performance by separately quantifying how much uncertainty arises from the inherent difficulty of the task or the limited training examples. The introduced concepts of uncertainty modelling extend naturally to many other imaging modalities and data enhancement applications.
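The decomposition described in the abstract follows the law of total variance: the total predictive variance is the mean of the predicted per-voxel variances (intrinsic uncertainty) plus the variance of the predicted means across posterior samples (parameter uncertainty). The sketch below illustrates this idea in PyTorch, using a heteroscedastic Gaussian likelihood for intrinsic uncertainty and Monte Carlo dropout as a simple stand-in for the approximate Bayesian inference described above. The network architecture, channel counts, and function names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HeteroscedasticSRNet(nn.Module):
    """Toy voxel-wise regression network that outputs a predicted mean and a
    per-voxel log-variance (heteroscedastic, i.e. intrinsic uncertainty), with
    dropout layers that can be kept active at test time for MC-dropout sampling.
    The 6 channels loosely mirror the six unique diffusion tensor components;
    this is a simplification, not the paper's architecture."""

    def __init__(self, in_ch=6, out_ch=6, hidden=32, p_drop=0.1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_ch, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout3d(p_drop),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout3d(p_drop),
        )
        self.mean_head = nn.Conv3d(hidden, out_ch, kernel_size=1)
        self.logvar_head = nn.Conv3d(hidden, out_ch, kernel_size=1)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)


def heteroscedastic_nll(mean, logvar, target):
    """Gaussian negative log-likelihood with a learned per-voxel variance."""
    return (0.5 * torch.exp(-logvar) * (target - mean) ** 2 + 0.5 * logvar).mean()


@torch.no_grad()
def predictive_uncertainty(model, x, n_samples=20):
    """Monte Carlo estimate of predictive uncertainty and its decomposition.

    Law of total variance:
      total = mean of predicted variances (intrinsic)
            + variance of predicted means (parameter)
    """
    model.train()  # keep dropout active at inference time (MC dropout)
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(torch.exp(logvar))
    means = torch.stack(means)
    intrinsic = torch.stack(variances).mean(dim=0)
    parameter = means.var(dim=0, unbiased=False)
    return means.mean(dim=0), intrinsic, parameter, intrinsic + parameter
```

In this toy setup, a voxel whose total variance is dominated by the parameter component would suggest limited training coverage, whereas a dominant intrinsic component would suggest inherent ambiguity in the mapping, mirroring the high-level explanations the abstract attributes to the uncertainty decomposition.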
