Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 30, Issue 12, Pages 4554-4566
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2019.2939564
Keywords
Cross-modality person re-identification; data bias; asymmetric mapping; top-push constrained dictionary learning; domain adaptation
Person re-identification aims to match persons captured by multiple non-overlapping cameras, which are typically standard RGB cameras. In contemporary surveillance, cameras of other modalities, such as infrared and depth cameras, are being introduced because of their unique advantages in poorly illuminated scenes. However, re-identifying persons across cameras of different modalities is extremely difficult and, unfortunately, seldom discussed. The difficulty mainly arises because a person's appearance differs drastically between such camera modalities. In this paper, we tackle this challenging cross-modality person re-identification problem through top-push constrained modality-adaptive dictionary learning. The proposed model asymmetrically projects the heterogeneous features from the dissimilar modalities onto a common space, mitigating the modality-specific bias so that the heterogeneous data can be jointly encoded by a single shared dictionary in that canonical space. Moreover, a top-push ranking graph regularization is embedded in the model to improve discriminability, which further boosts matching accuracy. To implement the proposed model, an iterative procedure is developed that jointly optimizes these two processes. Extensive experiments on the benchmark SYSU-MM01 and BIWI RGBD-ID person re-identification datasets show promising results that outperform state-of-the-art methods.
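The two core ideas of the abstract — asymmetric modality-specific projections into a common space, and a top-push ranking penalty that pushes each positive pair closer than the anchor's hardest impostor — can be illustrated with a minimal numpy sketch. This is not the paper's actual formulation: the function names, Euclidean distance, and hinge form are illustrative assumptions, and the dictionary-learning and iterative-optimization components are omitted.

```python
import numpy as np

def to_common_space(X, P):
    """Asymmetric mapping: each modality gets its own projection matrix P,
    but both modalities land in the same common space (columns = samples).
    Illustrative stand-in for the paper's learned modality-adaptive mappings."""
    return P @ X

def top_push_loss(Za, Zb, labels_a, labels_b, margin=1.0):
    """Sketch of a top-push ranking penalty between two modalities.

    For every cross-modality positive pair (same identity), its distance
    must undercut the distance from the anchor to its *hardest* (closest)
    different-identity sample by at least `margin`; violations incur a
    hinge penalty, averaged over all positive pairs.
    """
    terms = []
    for i in range(Za.shape[1]):
        a = Za[:, i]
        neg = [np.linalg.norm(a - Zb[:, j])
               for j in range(Zb.shape[1]) if labels_b[j] != labels_a[i]]
        if not neg:
            continue
        hardest = min(neg)                      # the top (closest) impostor
        for j in range(Zb.shape[1]):
            if labels_b[j] == labels_a[i]:      # cross-modality positive pair
                d_pos = np.linalg.norm(a - Zb[:, j])
                terms.append(max(0.0, d_pos - hardest + margin))
    return sum(terms) / max(len(terms), 1)
```

With well-separated identities in the common space the loss is zero; when an impostor sits closer to the anchor than its true match, each violating positive pair contributes a positive hinge term, which is the signal the full model uses to sharpen discriminability.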