Article

Re-Ranking High-Dimensional Deep Local Representation for NIR-VIS Face Recognition

Journal

IEEE Transactions on Image Processing
Volume 28, Issue 9, Pages 4553-4565

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2019.2912360

Keywords

Heterogeneous face recognition; NIR-VIS face matching; re-ranking

Funding

  1. National Natural Science Foundation of China [61806152, 61876142, 61432014, U1605252, 61772402, 61671339, 61702394]
  2. National Key Research and Development Program of China [2016QY01W0200]
  3. Key Industrial Innovation Chain in Industrial Domain [2016KTZDGY04-02]
  4. National High-Level Talents Special Support Program of China [CS31117200001]
  5. Young Elite Scientists Sponsorship Program by CAST [2016QNRC001]
  6. Young Talent fund of University Association for Science and Technology in Shaanxi, China
  7. CCF-Tencent Open Fund
  8. China 111 Project [B16037]
  9. China Post-Doctoral Science Foundation [2018M631124]
  10. Fundamental Research Funds for the Central Universities [JB190117, JB191502]
  11. Xidian University-Intellifusion Joint Innovation Laboratory of Artificial Intelligence

Heterogeneous face recognition refers to matching facial images captured by different sensors or from different sources, and it has wide applications in public security and law enforcement. Because of the great differences in sensing and imaging procedures, there is a large feature gap between heterogeneous facial images. Existing methods merely compare the probe image with the gallery in feature space, yet the true target may not appear at the first rank because of appearance variations caused by the different sensing patterns. In order to exploit the valuable information contained in the initial ranking result, this paper proposes to re-rank a high-dimensional deep local representation for matching near-infrared (NIR) and visible (VIS) facial images, i.e., NIR-VIS face recognition. The high-dimensional deep local representation is first constructed by extracting deep features on local facial patches with a convolutional neural network (CNN) and concatenating them. Initial NIR-VIS recognition rankings are then obtained by comparing the compressed deep features. We further propose a novel and efficient locally linear re-ranking (LLRe-Rank) technique to refine these initial rankings. The proposed re-ranking method requires no human interaction or data annotation and can serve as an unsupervised post-processing technique. Experimental results on the challenging Oulu-CASIA NIR-VIS and CASIA NIR-VIS 2.0 databases demonstrate the effectiveness of our method.
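
To make the pipeline described in the abstract concrete, below is a minimal NumPy-only sketch: a fixed random projection stands in for the patch-level CNN, PCA stands in for feature compression, cosine similarity produces the initial ranking, and a least-squares reconstruction of each probe from its top-k gallery neighbours is used as an illustrative stand-in for the locally linear re-ranking step. All function names, parameters, and the exact re-ranking rule here are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH, STRIDE, FEAT_DIM = 32, 16, 64
# Fixed random projection standing in for a patch-level CNN (assumption).
_PROJ = rng.standard_normal((PATCH * PATCH, FEAT_DIM))

def extract_patch_features(image):
    """Crop overlapping patches, map each to FEAT_DIM values, and concatenate
    them into one high-dimensional local representation."""
    H, W = image.shape
    feats = [
        image[y:y + PATCH, x:x + PATCH].ravel() @ _PROJ
        for y in range(0, H - PATCH + 1, STRIDE)
        for x in range(0, W - PATCH + 1, STRIDE)
    ]
    return np.concatenate(feats)

def pca_compress(X, n_components=64):
    """Compress the concatenated representations with PCA (via SVD)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:min(n_components, Vt.shape[0])].T

def cosine_scores(probes, gallery):
    """Initial ranking: cosine similarity between probe and gallery features."""
    P = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return P @ G.T  # shape (num_probes, num_gallery)

def locally_linear_rerank(scores, probes, gallery, k=5, alpha=0.5):
    """Illustrative re-ranking: reconstruct each probe from its top-k gallery
    neighbours by least squares and blend the normalised reconstruction
    weights with the initial scores (a stand-in for LLRe-Rank)."""
    refined = scores.copy()
    for i, p in enumerate(probes):
        topk = np.argsort(-scores[i])[:k]
        N = gallery[topk].T                        # (dim, k) neighbour matrix
        w, *_ = np.linalg.lstsq(N, p, rcond=None)  # reconstruction coefficients
        w = np.abs(w) / (np.abs(w).sum() + 1e-12)
        refined[i, topk] = (1 - alpha) * scores[i, topk] + alpha * w
    return refined

if __name__ == "__main__":
    # Toy data: 3 "NIR" probes and 20 "VIS" gallery faces, 64x64 grayscale.
    probe_imgs = rng.random((3, 64, 64))
    gallery_imgs = rng.random((20, 64, 64))
    feats = np.stack([extract_patch_features(im)
                      for im in np.concatenate([probe_imgs, gallery_imgs])])
    feats = pca_compress(feats)
    P, G = feats[:3], feats[3:]
    initial = cosine_scores(P, G)
    refined = locally_linear_rerank(initial, P, G)
    print("initial rank-1 matches:", np.argmax(initial, axis=1))
    print("refined rank-1 matches:", np.argmax(refined, axis=1))
```

The re-ranking step runs entirely on the features and the initial score matrix, which mirrors the unsupervised, annotation-free post-processing role described above; the blending weight alpha and neighbourhood size k are illustrative parameters.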
