Article

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

Journal

MACHINE LEARNING AND KNOWLEDGE EXTRACTION
Volume 3, Issue 4, Pages 966-989

Publisher

MDPI
DOI: 10.3390/make3040048

Keywords

interpretability; explainer; explanator; explainable AI; trust; ethics; black box; Deep Neural Network

Funding

  1. Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB

Abstract

Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks are black box models and are often criticized as non-transparent, with predictions that are not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to this lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between their inputs and outputs. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks applied to Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We identify drawbacks and gaps and summarize ideas for further research.
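
The abstract's central idea, that explainers probe a black box by analyzing the connection between its inputs and outputs, can be illustrated with a minimal perturbation-based sketch. The snippet below is an occlusion-map example written for illustration only; it is not a method taken from the survey, and the toy_predict model, patch size, and baseline value are hypothetical placeholders chosen to keep the sketch self-contained.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=8, baseline=0.0):
    # Model-agnostic, perturbation-based explanation: occlude one patch of the
    # input at a time and measure how much the target-class score drops.
    # Large drops mark regions the model relied on for its prediction.
    h, w = image.shape[:2]
    base_score = predict(image[None])[0, target_class]
    relevance = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            score = predict(occluded[None])[0, target_class]
            relevance[y:y + patch, x:x + patch] = base_score - score
    return relevance

# Hypothetical stand-in for a trained classifier: it scores "class 0" by the
# mean brightness of the top-left image quadrant so the example runs end to end.
def toy_predict(batch):
    q = batch[:, :16, :16].mean(axis=(1, 2, 3))
    return np.stack([q, 1.0 - q], axis=1)

image = np.random.rand(32, 32, 3).astype(np.float32)
heatmap = occlusion_map(toy_predict, image, target_class=0)
print(heatmap.shape)  # (32, 32) relevance map; high values mark important pixels
```

Occlusion is only one way of relating outputs back to inputs; gradient-based saliency maps and local surrogate models follow the same general pattern of attributing a prediction score to regions of the input image.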
