Article

Cross-modality person re-identification using hybrid mutual learning

Journal

IET COMPUTER VISION
Volume 17, Issue 1, Pages 1-12

Publisher

WILEY
DOI: 10.1049/cvi2.12123



Abstract
Cross-modality person re-identification (Re-ID) aims to retrieve a query identity from red, green, blue (RGB) images or infrared (IR) images. Many approaches have been proposed to reduce the distribution gap between the RGB modality and the IR modality. However, they ignore the valuable collaborative relationship between the two modalities. Hybrid Mutual Learning (HML) for cross-modality person Re-ID is proposed, which builds this collaborative relationship through mutual learning over local features and triplet relations. Specifically, HML contains local-mean mutual learning and triplet mutual learning, which transfer local representational knowledge and structural geometry knowledge, respectively, so as to reduce the gap between the RGB modality and the IR modality. Furthermore, Hierarchical Attention Aggregation is proposed to fuse local feature maps and local feature vectors, enriching the information fed to the classifier. Extensive experiments on two commonly used data sets, SYSU-MM01 and RegDB, verify the effectiveness of the proposed method.
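To illustrate the mutual-learning idea underlying HML, the sketch below computes a symmetric KL-divergence loss between the class predictions of an RGB branch and an IR branch, so that each branch is trained toward the other's output distribution. This is a minimal, generic illustration of mutual learning in pure Python; the function names and the exact loss form are assumptions for illustration, not the authors' exact HML formulation, which also includes local-mean and triplet terms.

```python
import math

def softmax(logits):
    # Convert raw class scores to a probability distribution (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    # KL(p || q) over discrete distributions; terms with p_i == 0 contribute 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_learning_loss(rgb_logits, ir_logits):
    # Symmetric KL between the two modality branches' predictions:
    # each branch is pulled toward the other, reducing the modality gap.
    p = softmax(rgb_logits)
    q = softmax(ir_logits)
    return kl_div(p, q) + kl_div(q, p)

# When the branches agree, the mutual loss vanishes; disagreement is penalised.
print(mutual_learning_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

In practice such a term would be added to each branch's supervised Re-ID loss, so gradient updates on one modality are guided by the other's softened predictions.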

