Review

Explainability of Automated Fact Verification Systems: A Comprehensive Review

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 23, Pages -

Publisher

MDPI
DOI: 10.3390/app132312608

Keywords

automated fact verification; AFV; explainable artificial intelligence; XAI; explainable AFV

This study examines the importance of explainability in Automated Fact Verification (AFV) and highlights current gaps and limitations. It finds that explainability in AFV lags behind the broader field of explainable AI (XAI). The study summarizes the elements of explainability in AFV, covering architectural, methodological, and dataset-related aspects, and offers recommendations for modifications that would make AI more comprehensible and acceptable to society at large.
The rapid growth of Artificial Intelligence (AI) has led to considerable progress in Automated Fact Verification (AFV). This process involves collecting evidence for a statement, assessing its relevance, and predicting the statement's accuracy. Recently, research has begun to explore automatic explanations as an integral part of the accuracy-analysis process. However, explainability within AFV lags behind the wider field of explainable AI (XAI), which aims to make AI decisions more transparent. This study examines the notion of explainability as a topic within XAI, focusing on how it applies to the specific task of Automated Fact Verification. It considers the explainability of AFV in terms of architectural, methodological, and dataset-related elements, with the aim of making AI more comprehensible and acceptable to society at large. Although there is broad consensus that AI systems should be explainable, there is a dearth of systems and processes to achieve it. This research investigates the concept of explainable AI in general and illustrates its various aspects through the particular task of Automated Fact Verification, including the topic of faithfulness in the context of local and global explainability. The paper concludes by highlighting gaps and limitations in current data science practices and recommending modifications to architectural and data curation processes, contributing to the broader goals of explainability in Automated Fact Verification.
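
The abstract describes AFV as a pipeline: retrieve evidence for a claim, assess its relevance, predict a verdict, and (increasingly) attach an explanation. As a rough illustration of where an explanation slots into that pipeline, here is a minimal toy sketch in Python. It is not the architecture surveyed in the review; the function names, the keyword-overlap relevance score, and the negation-based verdict rule are all simplifying assumptions introduced only for illustration.

```python
from dataclasses import dataclass

# Toy sketch of an AFV pipeline: retrieve evidence, score its relevance,
# predict a verdict, and attach a human-readable explanation.
# All names and the scoring/verdict logic are illustrative assumptions.

@dataclass
class Verdict:
    label: str          # "SUPPORTED", "REFUTED", or "NOT ENOUGH INFO"
    evidence: list      # evidence sentences the verdict relies on
    explanation: str    # natural-language justification for the verdict

def retrieve_evidence(claim: str, corpus: list[str]) -> list[str]:
    # Toy retrieval: keep sentences sharing at least one word with the claim.
    claim_words = set(claim.lower().split())
    return [s for s in corpus if claim_words & set(s.lower().split())]

def score_relevance(claim: str, sentence: str) -> float:
    # Toy relevance score: fraction of claim words that appear in the sentence.
    claim_words = set(claim.lower().split())
    sent_words = set(sentence.lower().split())
    return len(claim_words & sent_words) / max(len(claim_words), 1)

def verify(claim: str, corpus: list[str]) -> Verdict:
    evidence = retrieve_evidence(claim, corpus)
    ranked = sorted(evidence, key=lambda s: score_relevance(claim, s), reverse=True)
    if not ranked:
        return Verdict("NOT ENOUGH INFO", [],
                       "No evidence overlapping the claim was found.")
    top = ranked[0]
    # Toy verdict rule: treat negation in the best evidence sentence as refutation.
    label = "REFUTED" if " not " in f" {top.lower()} " else "SUPPORTED"
    explanation = f"Verdict {label}: the most relevant evidence is '{top}'."
    return Verdict(label, [top], explanation)

if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is located in Paris.",
        "The Eiffel Tower was completed in 1889.",
    ]
    print(verify("The Eiffel Tower is located in Paris", corpus))
```

In a real AFV system each stage would be a learned component (for example, a dense retriever, a cross-encoder for relevance, and a classifier for the verdict), and the explanation would be generated rather than templated; the sketch only shows the pipeline stages and the point at which an explanation accompanies the verdict, as discussed in the abstract.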
