Article

Incremental Generative Occlusion Adversarial Suppression Network for Person ReID

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Pages 4212-4224

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TIP.2021.3070182

Keywords

Feature extraction; Training; Image reconstruction; Body regions; Training data; Cameras; Two dimensional displays; Batch-based incremental occlusion; occlusion suppression; occluded person re-identification

Funding

  1. National Natural Science Foundation of China (NSFC) [62076184, 61673299, 61976160, 61573255]
  2. Key Laboratory of Advanced Theory and Application in Statistics and Data Science, East China Normal University, Ministry of Education
  3. Fundamental Research Funds for the Central Universities


In this study, a novel Incremental Generative Occlusion Adversarial Suppression (IGOAS) network is proposed to address the occlusion problem in person re-identification. Rather than directly learning the hardest occlusion, the network gradually learns occlusions of increasing difficulty, improving its robustness. Experimental results show the competitive performance of IGOAS on occluded datasets, with strong Rank-1 accuracy and mAP.
Person re-identification (re-id) suffers from the significant challenge of occlusion, where an image contains occlusions and less discriminative pedestrian information. Much existing work attempts to address this by designing complex modules that capture implicit information (including human pose landmarks, mask maps, and spatial information), so that the network focuses on learning discriminative features from non-occluded body regions and matches effectively under spatial misalignment. Few studies have focused on data augmentation, given that existing single-based data augmentation methods bring limited performance improvement. To address the occlusion problem, we propose a novel Incremental Generative Occlusion Adversarial Suppression (IGOAS) network. It consists of 1) an incremental generative occlusion block that generates easy-to-hard occlusion data, making the network more robust to occlusion by gradually learning harder occlusions instead of the hardest occlusion directly, and 2) a global-adversarial suppression (G&A) framework with a global branch and an adversarial suppression branch. The global branch extracts steady global features of the images. The adversarial suppression branch, embedded with two occlusion suppression modules, minimizes the response to the generated occlusion and strengthens attentive feature representation on non-occluded body regions. Finally, concatenating the two branches' features yields a more discriminative pedestrian feature descriptor that is robust to occlusion. Experiments on occluded datasets show the competitive performance of IGOAS: on Occluded-DukeMTMC, it achieves 60.1% Rank-1 accuracy and 49.4% mAP.
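The paper does not include code here, but the easy-to-hard idea behind the incremental generative occlusion block can be illustrated with a minimal sketch: occlude a random patch of each training image, and grow the patch size with training progress so the network sees easy occlusions first and harder ones later. The function name, the linear size schedule, and the random-noise fill below are all illustrative assumptions, not the authors' exact batch-based implementation.

```python
import numpy as np

def incremental_occlusion(batch, step, total_steps, max_frac=0.5, rng=None):
    """Occlude a random patch whose size grows with training progress.

    batch:       (N, H, W, C) float array of images.
    step:        current training step (controls occlusion difficulty).
    total_steps: total steps over which the occlusion ramps up.
    max_frac:    maximum fraction of height/width occluded at the end.

    This is a sketch of an easy-to-hard occlusion schedule, not the
    paper's exact block.
    """
    rng = rng or np.random.default_rng()
    out = batch.copy()
    n, h, w, c = batch.shape
    # Easy-to-hard schedule: patch side length ramps linearly to max_frac.
    frac = max_frac * min(step / total_steps, 1.0)
    oh, ow = max(1, int(h * frac)), max(1, int(w * frac))
    for i in range(n):
        top = rng.integers(0, h - oh + 1)
        left = rng.integers(0, w - ow + 1)
        # Fill the patch with random noise (one plausible occluder choice).
        out[i, top:top + oh, left:left + ow] = rng.random((oh, ow, c))
    return out
```

Early in training (`step` small) only a tiny patch is hidden; by the end up to `max_frac` of each side is occluded, which mirrors the "gradually learning harder occlusions" behavior described above.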
