Article

xCos: An Explainable Cosine Metric for Face Verification Task

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3469288

Keywords

XAI; xCos; face verification; face recognition; explainable AI; explainable artificial intelligence

Funding

  1. Ministry of Science and Technology, Taiwan [MOST 110-2634-F-002-026]
  2. Qualcomm Technologies, Inc.

Abstract

In this paper, a novel similarity metric, xCos, is proposed for face verification models to provide meaningful explanations. Its effectiveness has been demonstrated on LFW and various competitive benchmarks, ensuring both model interpretability and accuracy.
We study explainable AI (XAI) for the face recognition task, particularly face verification. Face verification has become a crucial task in recent years and has been deployed in plenty of applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), that comes with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model pays attention, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, not only providing novel and desirable model interpretability for face verification but also maintaining accuracy when plugged into existing face recognition models.
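The abstract's description of xCos — a map of patch-wise local similarities weighted by an attention map and summed into one score — can be sketched as follows. The shapes, names, and normalization here are illustrative assumptions for the sketch, not the authors' actual implementation.

```python
import numpy as np

def xcos_score(feat1, feat2, attention):
    """Hypothetical sketch of the xCos idea: each face is represented as an
    (H, W, C) grid of local feature vectors; a per-patch cosine-similarity
    map is weighted by an (H, W) attention map and summed to a scalar score.

    Returns (score, cos_map) so the local-similarity map can be inspected,
    which is the source of the explanation the abstract describes.
    """
    # Cosine similarity between corresponding local feature vectors.
    num = (feat1 * feat2).sum(axis=-1)
    denom = np.linalg.norm(feat1, axis=-1) * np.linalg.norm(feat2, axis=-1)
    cos_map = num / np.maximum(denom, 1e-8)
    # Attention-weighted sum of local similarities -> scalar xCos score.
    return float((attention * cos_map).sum()), cos_map

# Usage: with identical feature grids and an attention map that sums to 1,
# every patch cosine is 1, so the weighted score is 1 as well.
H, W, C = 7, 7, 32
rng = np.random.default_rng(0)
f = rng.normal(size=(H, W, C))
attn = np.full((H, W), 1.0 / (H * W))  # uniform, normalized attention
score, cos_map = xcos_score(f, f, attn)
```

In the paper the attention map is produced by a learnable module plugged into the verification model; here it is simply a fixed uniform map to keep the sketch self-contained.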

