Review

A Radiology-focused Review of Predictive Uncertainty for AI Interpretability in Computer-assisted Segmentation

Journal

Radiology: Artificial Intelligence

Publisher

RADIOLOGICAL SOC NORTH AMERICA (RSNA)
DOI: 10.1148/ryai.2021210031

Keywords

Segmentation; Quantification; Ethics; Bayesian Network (BN)

Funding

  1. Defence Research and Development Canada via the Innovation for Defence Excellence and Security program [CFPMN2-017-McMaster]

Abstract

The recent advances in and availability of computer hardware, software tools, and massive digital data archives have enabled the rapid development of artificial intelligence (AI) applications. Concerns over whether AI tools can communicate decisions to radiologists and primary care physicians are of particular importance, because automated clinical decisions can substantially affect patient outcomes. A challenge facing the clinical implementation of AI stems from the potential lack of trust clinicians have in these predictive models. This review expands on the existing literature on interpretability methods for deep learning and surveys the state-of-the-art methods for predictive uncertainty estimation in computer-assisted segmentation tasks. Finally, we discuss how uncertainty can improve predictive performance and model interpretability and can act as a tool to help foster trust. (C) RSNA, 2021.
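To ground what "predictive uncertainty estimation" means for segmentation, the sketch below shows Monte Carlo dropout, one widely used approximate Bayesian technique covered by reviews in this area: dropout is kept active at test time, the network is sampled several times, and the per-pixel entropy of the averaged softmax output serves as an uncertainty map. This is an illustrative example, not code from the paper; the model, tensor shapes, and sample count are hypothetical placeholders.

```python
# Minimal sketch of Monte Carlo dropout for per-pixel predictive
# uncertainty in segmentation. Assumes a PyTorch model that maps a
# (B, C, H, W) image tensor to (B, K, H, W) class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic at inference time."""
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d)):
            module.train()


@torch.no_grad()
def mc_dropout_segmentation(model: nn.Module, image: torch.Tensor,
                            n_samples: int = 20):
    """Return the mean softmax map and a per-pixel entropy map."""
    model.eval()              # freeze batch-norm statistics
    enable_mc_dropout(model)  # but keep dropout active
    probs = torch.stack(
        [F.softmax(model(image), dim=1) for _ in range(n_samples)]
    )                                        # (n_samples, B, K, H, W)
    mean_probs = probs.mean(dim=0)           # (B, K, H, W)
    # Predictive entropy: high where the sampled predictions disagree.
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)
    return mean_probs, entropy
```

Pixels with high entropy flag regions where a radiologist may want to review the automated contour, which is the trust-building role of uncertainty the review emphasizes.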
