4.7 Article

Learning diverse and deep clues for person reidentification

Journal

IMAGE AND VISION COMPUTING
Volume 126, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.imavis.2022.104551

Keywords

Attention network; Convolutional neural network; Grouped pyramid; Global feature; Local features; Person re-identification

Funding

  1. National Key R&D Program of China [2021YFC2009200]


This paper proposes a two-stage attention network, WDC-Net, for person re-identification. It focuses on enhancing the connection between local features and extracting robust feature representations. Experimental results show that the proposed method achieves competitive performance on large-scale datasets and performs well on challenging datasets.
Extracting robust features has been the core of person re-identification (ReID). Existing convolutional neural network-based methods pay more attention to local features than to the connections between them. Given that human bodies possess certain structural information, it is necessary to strengthen the connection of local features for the ReID task. This paper proposes a two-stage attention network termed Width and Depth Channel Attention Network (WDC-Net) for ReID. Unlike conventional attention-based methods, which focus only on single local features, our network exploits diverse feature representations to alleviate the missing-information problem caused by occlusion. Specifically, in the first stage, it splits the local associations of the feature map from a multi-scale perspective to extract relatively independent multi-level local features of the human body. In the second stage, the correlation of multi-level local features is reconstructed through a grouped pyramid structure to obtain a more robust global feature representation. We also propose an adaptive margin weight adjustment strategy to enhance the adaptability of the attention weights. Large-scale ReID datasets are used to evaluate our method. On Market1501 and DukeMTMC, the proposed method achieves 90.7%/96.4% mAP/R-1 and 81.8%/90.8% mAP/R-1, respectively. It is worth highlighting that the proposed method also achieves 55.3%/65.3% mAP/R-1 on the challenging Occluded-Duke dataset. Extensive experimental results demonstrate the superiority of our method, which achieves state-of-the-art performance on ReID. (c) 2022 Elsevier B.V. All rights reserved.
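The two-stage design described in the abstract can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the authors' WDC-Net implementation: the function names, the stripe scales, the group size, and the pooling choices are all assumptions made here for illustration only.

```python
import numpy as np

def split_multiscale(feat, scales=(2, 4)):
    """Stage 1 (illustrative): split a C x H x W feature map into
    horizontal stripes at several scales, giving relatively independent
    multi-level local descriptors of the body."""
    c, h, w = feat.shape
    local_feats = []
    for s in scales:
        stripe_h = h // s
        for i in range(s):
            part = feat[:, i * stripe_h:(i + 1) * stripe_h, :]
            local_feats.append(part.mean(axis=(1, 2)))  # average-pool each stripe
    return local_feats  # list of C-dimensional local descriptors

def grouped_pyramid(local_feats, group=2):
    """Stage 2 (illustrative): re-associate neighbouring local descriptors
    in groups, then concatenate, to rebuild a global representation."""
    merged = [np.maximum.reduce(local_feats[i:i + group])
              for i in range(0, len(local_feats), group)]
    return np.concatenate(merged)

feat = np.random.rand(256, 24, 8)   # e.g. a C=256, H=24, W=8 backbone output
g = grouped_pyramid(split_multiscale(feat))
print(g.shape)                      # concatenated global feature, here (768,)
```

With scales (2, 4) the first stage yields six stripe descriptors; grouping them in pairs and concatenating gives a 3 x 256 = 768-dimensional global vector. The actual WDC-Net learns attention weights over these associations rather than using fixed pooling.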

