4.1 Article

A historical perspective of biomedical explainable AI research

Journal

PATTERNS
Volume 4, Issue 9, Pages -

Publisher

CELL PRESS
DOI: 10.1016/j.patter.2023.100830

Keywords

-


This study aimed to analyze the association between COVID-19 and the advancement of explainable artificial intelligence (XAI) research. By extracting relevant studies from the PubMed database and manually labeling them, the study found that the emergence of COVID-19 may have driven attention toward XAI and accelerated its development trends.
The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust in the AI decision-making process. Such methods can be broadly categorized into two main types: post hoc explanations and inherently interpretable algorithms. We aimed to analyze the possible associations between COVID-19 and the push of explainable AI (XAI) to the forefront of biomedical research. We automatically extracted from the PubMed database biomedical XAI studies related to concepts of causality or explainability and manually labeled 1,603 papers with respect to XAI categories. To compare the trends pre- and post-COVID-19, we fit a change point detection model and evaluated significant changes in publication rates. We show that the advent of COVID-19 at the beginning of 2020 could be the driving factor behind an increased focus on XAI, playing a crucial role in accelerating an already evolving trend. Finally, we present a discussion of the future societal use and impact of XAI technologies and potential future directions for those who pursue fostering clinical trust with interpretable machine learning models.
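The trend comparison rests on change point detection applied to publication rates. The abstract does not specify which model was used, so the sketch below is only a minimal illustration of the idea: a single-change-point, least-squares split of monthly publication counts. The detect_change_point helper and the Poisson-sampled counts are hypothetical, not the study's actual data or method.

import numpy as np

def detect_change_point(counts: np.ndarray) -> int:
    """Return the index that best splits `counts` into two constant-mean
    segments (single change point, least-squares criterion)."""
    n = len(counts)
    best_idx, best_sse = 1, np.inf
    for k in range(1, n - 1):  # keep at least one point on each side
        left, right = counts[:k], counts[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_idx, best_sse = k, sse
    return best_idx

# Illustrative monthly counts of biomedical XAI publications, 2018-2021;
# a rate shift is simulated at the start of 2020.
rng = np.random.default_rng(0)
pre = rng.poisson(lam=8, size=24)    # 2018-2019
post = rng.poisson(lam=20, size=24)  # 2020-2021
counts = np.concatenate([pre, post]).astype(float)

k = detect_change_point(counts)
print(f"Estimated change point at month index {k}")
print(f"Mean rate before: {counts[:k].mean():.1f}, after: {counts[k:].mean():.1f}")

On such synthetic data the estimated change point falls near month 24 (early 2020), mirroring the kind of shift in publication rates the study reports; the paper's own model may handle multiple change points and statistical significance testing.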

