Article

Does explainable machine learning uncover the black box in vision applications?

Journal

IMAGE AND VISION COMPUTING
Volume 118, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.imavis.2021.104353

Keywords

Explainable machine learning; Deep learning; Vision; Signal processing

Funding

  1. SERB [MTR/2020/000335]


Abstract

Machine learning (ML) in general, and deep learning (DL) in particular, have become extremely popular tools in several vision applications (such as object detection, super-resolution, segmentation, and object tracking). Almost in parallel, the issue of explainability in ML (i.e., the ability to explain how a trained ML model arrived at its decision) in vision has also received fairly significant attention from various quarters. However, we argue that the current philosophy behind explainable ML suffers from certain limitations, and the resulting explanations may not meaningfully uncover black-box ML models. To elaborate our assertion, we first raise a few fundamental questions which have not been adequately discussed in the corresponding literature. We also provide perspectives on how explainability in ML can benefit from relying on more rigorous principles in the related areas.
