Article

Dual-granularity feature fusion in visible-infrared person re-identification

Journal

IET IMAGE PROCESSING
Volume -, Issue -, Pages -

Publisher

WILEY
DOI: 10.1049/ipr2.12999

Keywords

computer vision; convolutional neural nets; image retrieval; pedestrians

This paper proposes a novel dual-granularity feature fusion network for VI-ReID, aiming to enhance representation and robustness by fusing global and local features. Moreover, an identity-aware modal discrepancy loss is proposed to dynamically align the cross-modal distribution of pedestrians and reduce modality discrepancies.

Visible-infrared person re-identification (VI-ReID) aims to match images of the same person captured in different modalities. Existing methods mainly focus on learning single-granularity representations, which have limited discriminability and weak robustness. This paper proposes a novel dual-granularity feature fusion network for VI-ReID. Specifically, a dual-branch module is adopted that extracts global and local features and fuses them to enhance representational ability. Furthermore, an identity-aware modal discrepancy loss is proposed that promotes modality alignment by reducing the gap between features from the visible and infrared modalities. Finally, considering the influence of non-discriminative information in the modality-shared features of RGB-IR images, a greyscale conversion is introduced to better extract modality-irrelevant discriminative features. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the effectiveness of the framework and its superiority over state-of-the-art methods.
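The abstract does not give the exact formulation of the identity-aware modal discrepancy loss or the greyscale conversion, but both ideas can be illustrated with a minimal PyTorch-style sketch. The names below (identity_aware_modal_discrepancy_loss, rgb_to_grey_3ch) are hypothetical, and the per-identity centroid distance is only one plausible reading of "reducing the gap between features from visible and infrared modalities", not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def identity_aware_modal_discrepancy_loss(
    vis_feats: torch.Tensor,   # (Nv, D) visible-modality embeddings
    ir_feats: torch.Tensor,    # (Ni, D) infrared-modality embeddings
    vis_labels: torch.Tensor,  # (Nv,) identity labels for visible embeddings
    ir_labels: torch.Tensor,   # (Ni,) identity labels for infrared embeddings
) -> torch.Tensor:
    """Penalise, for each identity present in both modalities, the distance
    between its visible and infrared feature centroids (a sketch, not the
    paper's exact loss)."""
    loss = vis_feats.new_zeros(())
    shared_ids = set(vis_labels.tolist()) & set(ir_labels.tolist())
    if not shared_ids:
        return loss
    for pid in shared_ids:
        v_centroid = vis_feats[vis_labels == pid].mean(dim=0)
        i_centroid = ir_feats[ir_labels == pid].mean(dim=0)
        # Euclidean gap between the two modality centroids of the same person
        loss = loss + F.pairwise_distance(
            v_centroid.unsqueeze(0), i_centroid.unsqueeze(0)
        ).squeeze()
    return loss / len(shared_ids)


def rgb_to_grey_3ch(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an RGB batch (N, 3, H, W) to a 3-channel greyscale batch so the
    same backbone can consume it alongside infrared inputs."""
    weights = rgb.new_tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
    grey = (rgb * weights).sum(dim=1, keepdim=True)
    return grey.expand(-1, 3, -1, -1)
```

In a training loop, such a discrepancy term would presumably be weighted and added to the usual identification and metric-learning losses, while the greyscale images would replace or augment the RGB inputs so that colour, which the infrared modality lacks, contributes less to the shared representation.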

