Article

Combining multilevel feature extraction and multi-loss learning for person re-identification

Journal

NEUROCOMPUTING
Volume 334, Pages 68-78

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.01.005

Keywords

Multilevel feature extraction; Multi-loss learning; Recurrent comparative network

Funding

  1. National Natural Science Foundation of China [61673274]

The goal of person re-identification (re-id) is to match images of the same person captured by multiple cameras with non-overlapping views. It is a challenging task due to the large spatial displacement and human pose change of person images across different views. Recently, deep Convolutional Neural Networks (CNNs) have significantly improved the performance of person re-id. In this paper, we present a hybrid deep model that combines multilevel feature extraction and multi-loss learning for more robust pedestrian descriptors. The multi-loss function jointly optimizes the verification task, which aims to verify whether two images belong to the same person, and the recognition task, which aims to predict the identity of each image. Specifically, given two person images, we first apply a deep learning network, called the Feature Aggregation Network (FAN), to extract their multilevel CNN features by fusing the information of different layers. For the verification task, a Recurrent Comparative Network (RCN) is presented to learn a joint representation of paired CNN features. RCN determines whether two images depict the same person by focusing on discriminative regions and alternately comparing their appearance. It is an algorithmic imitation of the human decision-making process, in which a person repeatedly compares two objects before making a decision about their similarity. For the recognition task, a parameter-free operation termed Global Average Pooling (GAP) is applied after each CNN feature to extract identity-related features. Extensive experiments are conducted on four datasets, including CUHK03, CUHK01, Market1501 and DukeMTMC, and the experimental results demonstrate the effectiveness of our presented method. (C) 2019 Elsevier B.V. All rights reserved.
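The joint verification-plus-recognition objective described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function names, the scalar pair score standing in for the RCN output, and the weighting factor `alpha` are assumptions made for the sketch.

```python
import numpy as np

def global_average_pooling(feat):
    # feat: (C, H, W) CNN feature map -> (C,) identity-related descriptor
    return feat.mean(axis=(1, 2))

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def identification_loss(logits, label):
    # recognition task: cross-entropy over person identities
    return -np.log(softmax(logits)[label])

def verification_loss(score, same):
    # verification task: binary cross-entropy on a same/different pair score
    p = 1.0 / (1.0 + np.exp(-score))
    return -(same * np.log(p) + (1 - same) * np.log(1 - p))

def multi_loss(logits_a, logits_b, label_a, label_b, pair_score, same, alpha=1.0):
    # joint objective: identity prediction for each image of the pair,
    # plus a verification term on their joint representation
    return (identification_loss(logits_a, label_a)
            + identification_loss(logits_b, label_b)
            + alpha * verification_loss(pair_score, same))
```

In this sketch, GAP collapses each spatial feature map to a channel descriptor for the recognition branch, while the verification branch scores the pair as same or different; the two losses are simply summed, with `alpha` balancing their contributions.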
