4.6 Article

A comparison of Monte Carlo dropout and bootstrap aggregation on the performance and uncertainty estimation in radiation therapy dose prediction with deep learning neural networks

Journal

PHYSICS IN MEDICINE AND BIOLOGY
Volume 66, Issue 5

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6560/abe04f

Keywords

radiation therapy; dose prediction; deep learning; treatment planning; uncertainty; Monte Carlo dropout; bootstrap aggregation

Funding

  1. National Institutes of Health (NIH) [R01CA237269]
  2. Cancer Prevention & Research Institute of Texas (CPRIT) [IIRA RP150485]


The study applies MCDO and bagging to deep learning models for uncertainty estimation in radiation therapy dose prediction, showing that bagging significantly reduces loss and prediction error, yielding a lower MAE than the baseline model. Enabling uncertainty estimation has no impact on inference time.
Recently, artificial intelligence technologies and algorithms have become a major focus for advancements in treatment planning for radiation therapy. As these begin to be incorporated into the clinical workflow, a major concern from clinicians is not whether the model is accurate, but whether the model can express to a human operator when it does not know if its answer is correct. We propose to use Monte Carlo dropout (MCDO) and the bootstrap aggregation (bagging) technique on deep learning (DL) models to produce uncertainty estimations for radiation therapy dose prediction. We show that both methods are capable of generating a reasonable uncertainty map and, with our proposed scaling technique, of creating interpretable uncertainties and bounds on the prediction and any relevant metrics. Performance-wise, bagging provides a statistically significant reduction in loss value and errors for most of the metrics investigated in this study. The addition of bagging further reduced errors by another 0.34% for D-mean and 0.19% for D-max, on average, compared to the baseline model. Overall, the bagging framework provided a significantly lower mean absolute error (MAE) of 2.62, as opposed to the baseline model's MAE of 2.87. The usefulness of bagging, from a performance standpoint alone, depends heavily on the problem and the acceptable predictive error, and its high upfront computational cost during training should be factored into the decision of whether to use it. In terms of deployment with uncertainty estimations turned on, both methods offer the same performance time of about 12 s. As an ensemble-based metaheuristic, bagging can be used with existing machine learning architectures to improve stability and performance, and MCDO can be applied to any DL model that has dropout as part of its architecture.
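The MCDO procedure the abstract describes, keeping dropout active at inference and repeating stochastic forward passes to obtain a predictive mean and an uncertainty estimate, can be sketched in a few lines. The network below is a hypothetical stand-in (a small random two-layer map, not the authors' dose-prediction model), and the function names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained dose-prediction network:
# a fixed two-layer map whose hidden units we drop out at inference.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def predict_with_dropout(x, p=0.5):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p     # Bernoulli dropout mask
    h = h * mask / (1.0 - p)           # inverted-dropout scaling
    return (h @ W2).item()

def mcdo_uncertainty(x, n_samples=100):
    """Monte Carlo dropout: repeat stochastic passes; the sample mean is
    the prediction and the sample std is the uncertainty estimate."""
    samples = [predict_with_dropout(x) for _ in range(n_samples)]
    return float(np.mean(samples)), float(np.std(samples))

x = rng.normal(size=(8,))
mean_dose, dose_std = mcdo_uncertainty(x)
```

Bagging produces an analogous spread by training an ensemble of models on bootstrap resamples of the training set and taking the mean and standard deviation across ensemble members' predictions instead of across dropout samples.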

