3.8 Proceedings Paper

Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition

Source

COMPUTER VISION, ECCV 2022, PT XII
Volume 13672, Issue -, Pages 631-647

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-19775-8_37

Keywords

Attention similarity knowledge distillation; Cosine similarity; Low resolution face recognition

Funding

  1. ICT R&D program of MSIT/IITP [2020-0-00857]
  2. Korea Institute of Energy Technology Evaluation and Planning (KETEP) - Korea government (MOTIE) [20202910100030]
  3. Electronics and Telecommunications Research Institute (ETRI) - Korean government [22ZR1100]
  4. Institute for Information & Communication Technology Planning & Evaluation (IITP), Republic of Korea [22ZR1100] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
  5. Korea Evaluation Institute of Industrial Technology (KEIT) [20202910100030] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

This study proposes an attention similarity knowledge distillation approach to boost face recognition performance for low-resolution images by transferring attention maps from a high-resolution network to a low-resolution network.
Deep learning has achieved outstanding performance on face recognition benchmarks, but performance degrades significantly for low resolution (LR) images. We propose an attention similarity knowledge distillation approach, which transfers attention maps obtained from a high resolution (HR) network as a teacher to an LR network as a student to boost LR recognition performance. Inspired by the observation that humans can approximate an object's region in an LR image using prior knowledge obtained from HR images, we designed the knowledge distillation loss using cosine similarity to make the student network's attention resemble the teacher network's attention. Experiments on various LR face-related benchmarks confirmed that the proposed method generally improves recognition performance in LR settings, outperforming state-of-the-art results by simply transferring well-constructed attention maps. The code and pretrained models are publicly available at https://github.com/gist-ailab/teaching-where-to-look.
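The cosine-similarity distillation loss described in the abstract can be illustrated with a minimal PyTorch-style sketch. The sketch below assumes the attention maps are spatial maps of shape (B, 1, H, W) extracted from corresponding blocks of the frozen HR teacher and the LR student; the function name attention_similarity_loss and the bilinear resizing of mismatched maps are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

import torch
import torch.nn.functional as F

def attention_similarity_loss(teacher_attns, student_attns, eps=1e-8):
    # teacher_attns / student_attns: lists of attention maps with shape (B, 1, H, W),
    # taken from corresponding blocks of the HR teacher and the LR student.
    total = 0.0
    for t_attn, s_attn in zip(teacher_attns, student_attns):
        # Resize the student map if its spatial size differs (illustrative choice).
        if t_attn.shape[-2:] != s_attn.shape[-2:]:
            s_attn = F.interpolate(s_attn, size=t_attn.shape[-2:],
                                   mode="bilinear", align_corners=False)
        t_flat = t_attn.detach().flatten(1)   # stop gradients into the teacher
        s_flat = s_attn.flatten(1)
        cos = F.cosine_similarity(s_flat, t_flat, dim=1, eps=eps)  # per-sample similarity
        total = total + (1.0 - cos).mean()    # 1 - cosine similarity for this block
    return total / max(len(teacher_attns), 1)

if __name__ == "__main__":
    # Toy example with random maps standing in for extracted attention maps.
    t_maps = [torch.rand(8, 1, 56, 56), torch.rand(8, 1, 28, 28)]
    s_maps = [torch.rand(8, 1, 56, 56), torch.rand(8, 1, 28, 28)]
    print(attention_similarity_loss(t_maps, s_maps))

In training, a term like this would typically be added to the face recognition objective (e.g., an angular-margin loss) with a weighting factor while the HR teacher stays frozen; the exact attention extraction points and loss weight should be taken from the paper and repository rather than this sketch.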

