Journal
MULTIMEDIA TOOLS AND APPLICATIONS
Volume 80, Issue 13, Pages 20687-20705
Publisher
SPRINGER
DOI: 10.1007/s11042-021-10671-z
Keywords
Person re-identification; Collaborative representation; Cross-view learning; Kernel method
Funding
- National Natural Science Foundation of China [61806099]
- Natural Science Foundation of Jiangsu Province of China [BK20180790]
- Natural Science Research of Jiangsu Higher Education Institutions of China [8KJB520033]
- Research Start-up Fund of NUIST [2243141701077]
- Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD)
- Engineering Research Center of Digital Forensics, Ministry of Education
Person re-identification (re-ID) is challenged by large visual appearance changes of the same individual across different views, so extracting powerful feature representations from pedestrian images is a reasonable solution. The proposed CV-KCRC method seeks more robust and discriminative representations by projecting image features into a common low-dimensional subspace, and it outperforms many state-of-the-art algorithms in experiments on seven commonly used datasets.
Person re-identification (re-ID) is now widely applied in public security, yet it still faces many challenges owing to the large visual appearance changes of the same identity across different views. To reduce the intra-person discrepancy, extracting more powerful feature representations from pedestrian images is a reasonable solution. In this work we propose a cross-view kernel collaborative representation based classification (CV-KCRC) method for person re-ID. Our method aims to find more robust and discriminative feature representations that embody cross-view information, enhancing the identification capability of the features. We first map the image features into a high-dimensional feature space and then use view-specific projection matrices to project the high-dimensional features into a common low-dimensional subspace. In this shared subspace, the codings of the same person from different views are expected to have the highest similarity, yielding better matching performance. Experiments on seven commonly used datasets show that our algorithm outperforms many state-of-the-art algorithms.
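The pipeline described in the abstract (kernel mapping followed by view-specific projections into a shared subspace, then cross-view matching) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the anchor points, dimensions, and random projection matrices are hypothetical stand-ins (the paper learns the projections jointly via the CV-KCRC objective).

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, anchors, gamma=0.5):
    # Implicit high-dimensional mapping: RBF kernel values against anchor points
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy pedestrian features from two camera views (hypothetical sizes)
n, d, k, m = 6, 16, 8, 4                  # samples, feature dim, anchors, subspace dim
Xa = rng.normal(size=(n, d))              # features from view A
Xb = Xa + 0.1 * rng.normal(size=(n, d))   # view B: same identities plus view noise

anchors = rng.normal(size=(k, d))
Ka, Kb = rbf_kernel(Xa, anchors), rbf_kernel(Xb, anchors)

# View-specific projection matrices into a common low-dimensional subspace
# (randomly initialised here; CV-KCRC learns these from data)
Pa, Pb = rng.normal(size=(k, m)), rng.normal(size=(k, m))
Za, Zb = Ka @ Pa, Kb @ Pb

# Cosine similarity between codings across views; matching picks the most
# similar gallery coding for each probe
Za_n = Za / np.linalg.norm(Za, axis=1, keepdims=True)
Zb_n = Zb / np.linalg.norm(Zb, axis=1, keepdims=True)
sim = Za_n @ Zb_n.T
print(sim.shape)  # cross-view similarity matrix, one row per probe
```

With learned projections, the diagonal of `sim` (same identity across views) would dominate each row; here the matrices are random, so the sketch only shows the data flow.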
Authors