Article

DotSCN: Group Re-Identification via Domain-Transferred Single and Couple Representation Learning

Journal
IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TCSVT.2020.3031303

Keywords

Layout; Feature extraction; Task analysis; Training; Training data; Cameras; Group re-identification; domain transfer; couple representation; video surveillance; deep learning

Funding

  1. Ministry of Science and Technology, Taiwan [MOST 109-2634-F-007-013]
  2. Qualcomm Technologies, Inc., USA, through the Taiwan University Research Collaboration Project [NAT-410478]
  3. [18F18378]


This article proposes DotSCN, a method that improves group re-identification (G-ReID) performance through domain transfer and a novel couple representation learning scheme.
Group re-identification (G-ReID) is an important yet under-studied task. Its challenges lie not only in the appearance changes of individuals but also in changes of group layout and membership. The key task of G-ReID is therefore to learn group representations that are robust to such changes. Unlike single-person ReID, however, G-ReID still lacks comprehensive publicly available datasets, making it difficult to learn effective representations with deep learning models. In this article, we propose a Domain-Transferred Single and Couple Representation Learning Network (DotSCN). Its merits are twofold: 1) Owing to the lack of labeled training samples for G-ReID, existing G-ReID methods rely mainly on unsatisfactory hand-crafted features. To harness the representation-learning power of deep models, we first treat a group as a collection of individuals and transfer the individual representations learned from an existing labeled ReID dataset to a target G-ReID domain that has no suitable training dataset. 2) Taking into account the neighborhood relationships within a group, we further learn a novel couple representation between two group members, which achieves better discriminative power in G-ReID tasks. In addition, we propose a weight-learning method that adaptively fuses the domain-transferred individual and couple representations based on an L-shape prior. Extensive experimental results demonstrate the effectiveness of our approach, which significantly outperforms state-of-the-art methods by 11.7% in CMC-1 on the Road Group dataset and by 39.0% in CMC-1 on the DukeMTMC dataset.
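To make the abstract's core idea concrete, the following is a minimal sketch of matching two groups using both single (per-member) and couple (per-pair) representations, fused by a weight. All function names, the pairwise averaging used to form couple features, and the fixed fusion weight `alpha` are illustrative assumptions; the paper learns the couple representation with a network and learns the fusion weight adaptively from an L-shape prior.

```python
# Illustrative sketch of single + couple representation matching for G-ReID.
# The couple features and the fusion rule here are simplifying assumptions,
# not the authors' exact formulation.
import numpy as np

def couple_features(singles):
    """Build a couple representation for every unordered member pair.

    Here we simply average the two members' (domain-transferred)
    features; DotSCN instead learns this representation.
    """
    n = len(singles)
    return np.array([(singles[i] + singles[j]) / 2.0
                     for i in range(n) for j in range(i + 1, n)])

def group_distance(feats_a, feats_b):
    """Greedy best-match distance between two feature sets (cosine)."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                      # pairwise cosine similarities
    # for each feature in A, keep only its best match in B
    return float(np.mean(1.0 - sim.max(axis=1)))

def fused_distance(group_a, group_b, alpha=0.5):
    """Fuse single and couple distances with weight alpha.

    In the paper this weight is learned adaptively (L-shape prior);
    a fixed alpha is used here purely for illustration.
    """
    d_single = group_distance(group_a, group_b)
    d_couple = group_distance(couple_features(group_a),
                              couple_features(group_b))
    return alpha * d_single + (1 - alpha) * d_couple
```

A group of n members yields n(n-1)/2 couple features, so the couple term captures neighborhood relationships that per-member matching alone misses, which is the discriminative power the abstract attributes to the couple representation.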
