Article

Cross-modality disentanglement and shared feedback learning for infrared-visible person re-identification

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 252

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109337

Keywords

Cross-modality person re-identification; Generative adversarial network; Joint learning framework; Shared feedback

Funding

  1. National Natural Science Foundation of China [61802111, 62002100]
  2. Key R&D and Promotion Projects in Henan Province, China [212102210411]


This research proposes a novel cross-modality disentanglement and shared feedback learning framework for infrared-visible person re-identification. Using two networks, the framework achieves modality-level and feature-level alignment while maintaining identity consistency. Experimental results demonstrate that the method achieves competitive performance compared with state-of-the-art methods.
Infrared-visible person re-identification (IV-ReID) has become a research hotspot in computer vision. Compared with traditional person re-identification, the IV-ReID task remains very challenging due to the large differences between modalities. Most existing approaches attempt to bridge the cross-modality gap through feature-level constraints alone, but the results are not fully satisfactory. To this end, a novel cross-modality disentanglement and shared feedback (CMDSF) learning framework is proposed. The framework consists of a cross-modality image disentanglement network (CMIDN) and a dual-path shared feedback learning network (DSFLN). Specifically, the former uses a pairing strategy to disentangle cross-modality features more efficiently and to constrain the feature distribution distances between modalities; it achieves modality-level alignment while maintaining identity consistency. The latter adopts a dual-path shared module (DSM) to obtain discriminative mid-level feature information and achieves feature-level alignment. Furthermore, a feedback scoring module (FSM) with a negative feedback mechanism is proposed to compensate for the weak supervision provided by the objective loss during backpropagation; it optimizes the model parameters by supplying a strong feedback signal. In summary, we propose an efficient learning framework whose two parts are jointly trained and optimized in an end-to-end manner. Extensive experimental results on two cross-modality datasets demonstrate that our method achieves competitive performance compared with state-of-the-art methods.
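To make the dual-path idea concrete, the sketch below shows one common way such an architecture is organized: modality-specific shallow branches feeding a shared deeper backbone, with a pooled embedding for metric losses and an identity classifier for the cross-entropy loss. This is a minimal illustration under assumptions (the class names, the ResNet-50 split point, and the BNNeck head are not taken from the paper and do not reproduce the authors' CMIDN, DSM, or FSM modules).

```python
# Minimal sketch of a dual-path network with a shared backbone for IV-ReID.
# Assumption: modality-specific layers up to ResNet-50 layer1, shared layers after.
import torch
import torch.nn as nn
from torchvision import models


class DualPathSharedNet(nn.Module):
    """Two modality-specific shallow branches followed by shared deeper layers."""

    def __init__(self, num_identities: int):
        super().__init__()

        def shallow_branch() -> nn.Sequential:
            # Modality-specific stem (assumed split point, not from the paper).
            r = models.resnet50(weights=None)
            return nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool, r.layer1)

        self.visible_branch = shallow_branch()
        self.infrared_branch = shallow_branch()

        # Shared deeper layers intended to align mid-level features across modalities.
        r = models.resnet50(weights=None)
        self.shared = nn.Sequential(r.layer2, r.layer3, r.layer4)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.bnneck = nn.BatchNorm1d(2048)
        self.classifier = nn.Linear(2048, num_identities, bias=False)

    def forward(self, x: torch.Tensor, modality: str):
        branch = self.visible_branch if modality == "visible" else self.infrared_branch
        feat = self.shared(branch(x))
        feat = self.pool(feat).flatten(1)            # embedding for metric losses
        logits = self.classifier(self.bnneck(feat))  # logits for the identity loss
        return feat, logits


if __name__ == "__main__":
    net = DualPathSharedNet(num_identities=395)
    rgb = torch.randn(4, 3, 256, 128)  # visible images
    ir = torch.randn(4, 3, 256, 128)   # infrared images replicated to 3 channels
    f_v, y_v = net(rgb, "visible")
    f_i, y_i = net(ir, "infrared")
    print(f_v.shape, y_v.shape)        # torch.Size([4, 2048]) torch.Size([4, 395])
```

In a joint training setup, the identity loss on the logits and a cross-modality metric loss on the pooled embeddings would be combined; the paper's feedback scoring module would add a further signal on top of these objectives.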
