Article

Beyond modality alignment: Learning part-level representation for visible-infrared person re-identification

Journal

Image and Vision Computing
Volume 108, Article 104118

Publisher

Elsevier
DOI: 10.1016/j.imavis.2021.104118

Keywords

Visible-infrared person re-identification; Modality alignment; Part-aware feature learning; Hierarchical modality discriminator

Abstract

Visible-Infrared person re-IDentification (VI-reID) aims to automatically retrieve a pedestrian of interest captured by sensors of different modalities, such as a visible camera versus an infrared sensor. The task requires learning representations that are both modality-invariant and discriminant. Unfortunately, existing VI-reID work focuses mainly on tackling the modality difference, while discriminant information at the fine-grained level has not been well investigated, which causes inferior identification performance. To address this problem, we propose a Dual-Alignment Part-aware Representation (DAPR) framework that simultaneously alleviates the modality bias and mines discriminant representations at different levels. In particular, DAPR hierarchically reduces the modality discrepancy of high-level features by back-propagating reversed gradients from a modality classifier, so as to learn a modality-invariant feature space. Meanwhile, multiple classifier heads with an improved part-aware BNNeck are integrated to supervise the network in producing identity-discriminant representations with respect to both local details and global structures in the learned modality-invariant space. Trained in an end-to-end manner, the proposed DAPR produces camera-modality-invariant yet discriminant features for person matching across modalities. Extensive experiments on two benchmarks, SYSU-MM01 and RegDB, demonstrate the effectiveness of the proposed method. © 2021 Elsevier B.V. All rights reserved.
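To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch of (i) a gradient reversal layer feeding a modality classifier and (ii) a per-part BNNeck identity head. This is not the authors' released implementation: the class names (GradReverse, ModalityDiscriminator, PartBNNeckHead), the feature dimension of 2048, and the two-layer discriminator are all illustrative assumptions.

import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negates (and scales) the gradient in the
    # backward pass, so the backbone is trained to fool the modality classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ModalityDiscriminator(nn.Module):
    # Predicts the source modality (visible vs. infrared) from features whose
    # gradients have been reversed; a hierarchical variant would attach one such
    # discriminator per feature level. The 2-layer MLP here is an assumption.
    def __init__(self, dim=2048, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(inplace=True),
                                 nn.Linear(256, 2))

    def forward(self, feat):
        return self.net(GradReverse.apply(feat, self.lambd))


class PartBNNeckHead(nn.Module):
    # One identity-classifier head per body part (plus a global head): a
    # batch-norm "neck" sits between the metric-learning feature and the
    # identity classifier, following the common BNNeck recipe.
    def __init__(self, dim, num_ids):
        super().__init__()
        self.bn = nn.BatchNorm1d(dim)
        self.bn.bias.requires_grad_(False)   # BNNeck convention: freeze BN bias
        self.fc = nn.Linear(dim, num_ids, bias=False)

    def forward(self, feat):
        # feat: (batch, dim) pooled part feature; returns ID logits for
        # cross-entropy, while the pre-BN feature can feed a triplet loss.
        return self.fc(self.bn(feat))

Under these assumptions, training would sum the identity cross-entropy losses from all part and global heads with the (gradient-reversed) modality-classification loss, so that a single end-to-end objective pushes the features to be modality-invariant yet identity-discriminant.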
