Article

Top-Push Constrained Modality-Adaptive Dictionary Learning for Cross-Modality Person Re-Identification

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TCSVT.2019.2939564

Keywords

Cross-modality person re-identification; data bias; asymmetric mapping; top-push constrained dictionary learning; domain adaptation

Abstract

Person re-identification aims to match persons captured by multiple non-overlapping cameras, which are typically standard RGB cameras. In contemporary surveillance systems, cameras of other modalities, such as infrared and depth cameras, are introduced because of their unique advantages in poor-illumination scenarios. However, re-identifying persons across cameras of different modalities is extremely difficult and, unfortunately, seldom discussed. The difficulty mainly stems from the drastically different appearances a person exhibits under different camera modalities. In this paper, we tackle this challenging cross-modality person re-identification problem through top-push constrained modality-adaptive dictionary learning. The proposed model asymmetrically projects the heterogeneous features from the dissimilar modalities onto a common space, so that the modality-specific bias is mitigated and the heterogeneous data can be encoded simultaneously by a shared dictionary in this canonical space. Moreover, a top-push ranking graph regularization is embedded in the model to improve discriminability, which further boosts the matching accuracy. To implement the proposed model, an iterative procedure is developed to jointly optimize the asymmetric projections and the shared dictionary. Extensive experiments on the benchmark SYSU-MM01 and BIWI RGBD-ID person re-identification datasets show promising results that outperform state-of-the-art methods.
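
For readers who want a concrete picture of the model described above, the following is a minimal sketch of how a modality-adaptive dictionary learning objective with a top-push constraint is commonly written. The notation (projections P_1, P_2, shared dictionary D, sparse codes A_1, A_2, weights lambda, beta, margin rho) and the hinge form of the top-push term are illustrative assumptions, not necessarily the paper's exact formulation.

% Sketch (assumed notation): asymmetric projections P_1, P_2 map RGB and
% infrared/depth features X_1, X_2 into a common space, where a shared
% dictionary D with sparse codes A_1, A_2 encodes both modalities.
\min_{P_1, P_2, D, A_1, A_2}\;
\sum_{m=1}^{2}\Big(\lVert P_m X_m - D A_m\rVert_F^2 + \lambda \lVert A_m\rVert_1\Big)
\;+\; \beta\, \mathcal{R}_{\mathrm{top}}(A_1, A_2)

% One typical top-push ranking term: for each RGB code a_{1,i}, its cross-modality
% match a^{+}_{2,i} must lie closer than the nearest wrong-identity code by a margin rho.
\mathcal{R}_{\mathrm{top}}(A_1, A_2) =
\sum_{i}\Big[\rho + \lVert a_{1,i} - a^{+}_{2,i}\rVert_2^2
- \min_{j:\,\mathrm{id}(j) \neq \mathrm{id}(i)} \lVert a_{1,i} - a_{2,j}\rVert_2^2\Big]_{+}

A plausible reading of the iterative procedure mentioned in the abstract is alternating minimization: fix D and the codes to update P_1, P_2 (a least-squares step), then fix the projections to update D and A_1, A_2 by standard sparse coding, repeating until convergence.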

