Journal
RADIOLOGY-ARTIFICIAL INTELLIGENCE
Volume 3, Issue 6, Pages: -
Publisher
RADIOLOGICAL SOC NORTH AMERICA (RSNA)
DOI: 10.1148/ryai.2021210031
Keywords
Segmentation; Quantification; Ethics; Bayesian Network (BN)
Funding
- Defence Research and Development Canada via the Innovation for Defence Excellence and Security program [CFPMN2-017-McMaster]
Abstract
The recent advances in and availability of computer hardware, software tools, and massive digital data archives have enabled the rapid development of artificial intelligence (AI) applications. Whether AI tools can communicate their decisions to radiologists and primary care physicians is of particular importance, because automated clinical decisions can substantially impact patient outcomes. A challenge facing the clinical implementation of AI stems from the potential lack of trust clinicians have in these predictive models. This review expands on the existing literature on interpretability methods for deep learning and surveys state-of-the-art methods for predictive uncertainty estimation in computer-assisted segmentation tasks. Last, we discuss how uncertainty can improve predictive performance and model interpretability and can act as a tool to help foster trust. (C) RSNA, 2021.
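For intuition, one widely used family of predictive uncertainty estimation methods the review covers is Monte Carlo dropout: dropout is kept active at inference time, the model is run several times, and the spread of the predictions is read as uncertainty. A minimal, illustrative sketch follows; the toy linear "segmentation" model, its weights, and all names here are assumptions for demonstration, not the article's implementation.

```python
# Monte Carlo dropout uncertainty sketch (toy model, numpy only).
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w, drop_p=0.5):
    """One forward pass with dropout kept active at inference time."""
    mask = rng.random(w.shape) >= drop_p          # random dropout mask
    logits = x @ (w * mask) / (1.0 - drop_p)      # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-logits))          # per-pixel foreground prob.

x = rng.standard_normal((4, 8))   # 4 "pixels", 8 features each
w = rng.standard_normal((8, 1))   # toy model weights

T = 100                           # number of stochastic forward passes
probs = np.stack([stochastic_forward(x, w) for _ in range(T)])  # (T, 4, 1)

mean_prob = probs.mean(axis=0)    # predictive mean per pixel
# Predictive entropy: high where the sampled predictions disagree,
# i.e. where the model is uncertain about the segmentation label.
eps = 1e-9
entropy = -(mean_prob * np.log(mean_prob + eps)
            + (1 - mean_prob) * np.log(1 - mean_prob + eps))
print(mean_prob.ravel(), entropy.ravel())
```

The per-pixel entropy map is what such methods typically overlay on the segmentation so a radiologist can see where the model is least confident.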