Article

Deep Representation Learning With Part Loss for Person Re-Identification

Journal

IEEE Transactions on Image Processing
Volume 28, Issue 6, Pages 2860-2871

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TIP.2019.2891888

Keywords

Person re-identification; representation learning; part loss networks; convolutional neural networks

Funding

  1. National Postdoctoral Programme for Innovative Talents
  2. National Natural Science Foundation of China [61525206, 61532009, 61721004, U1705262, 61572050, 91538111, 61620106009]
  3. Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-JSC039]

Abstract

Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most of the current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily get over-fitted on a discriminative human body part on the training set. To gain the discriminative power on unseen person images, we propose a deep representation learning procedure named part loss network, to minimize both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with traditional global classification loss, simultaneously considering part loss enforces the deep network to learn representations for different body parts and gain the discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.
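The abstract describes a two-term objective: the usual global person-classification loss plus a part loss that classifies the person from each detected body part separately. The following plain-Python sketch illustrates how such a combined loss could be computed; the `weight` balancing term, the toy logits, and the function names are illustrative assumptions, not values or code from the paper.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # Negative log-probability assigned to the true identity.
    return -math.log(softmax(logits)[label])

def combined_part_loss(global_logits, part_logits_list, label, weight=1.0):
    """Global classification loss plus the mean of per-part
    classification losses (one classifier per detected body part).
    `weight`, which balances the two terms, is an illustrative knob."""
    global_term = cross_entropy(global_logits, label)
    part_terms = [cross_entropy(p, label) for p in part_logits_list]
    return global_term + weight * sum(part_terms) / len(part_terms)
```

For example, with a confident global prediction but one part that mispredicts the identity, the part term dominates the total, which is the mechanism the paper argues pushes the network to learn discriminative features for every body part rather than over-fitting to a single one:

```python
loss = combined_part_loss(
    global_logits=[2.0, 0.0],               # global head favors identity 0
    part_logits_list=[[1.0, 0.0],           # part 1 agrees
                      [0.0, 1.0]],          # part 2 disagrees
    label=0,
)
```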
