Article

Few-Shot Deep Adversarial Learning for Video-Based Person Re-Identification

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 29, Pages 1233-1245

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2019.2940684

Keywords

Feature extraction; Cameras; Visualization; Training; Measurement; Recurrent neural networks; Video sequences; Video-based person re-identification; variational recurrent neural networks; adversarial learning

Funding

  1. National Natural Science Foundation of China [61806035]
  2. National Key Research and Development Program of China [2018YFB0804205]
  3. NSFC [61725203, 61732008]

Abstract

Video-based person re-identification (re-ID) refers to matching people across camera views from arbitrary, unaligned video footage. Existing methods rely on supervision signals to optimise a projected space in which inter-video distances are maximised and intra-video distances are minimised. However, this demands exhaustively labelling people across camera views, so such methods do not scale to large camera networks. Moreover, learning effective video representations with view invariance is not explicitly addressed, so features extracted from different views exhibit different distributions. Matching videos for person re-ID therefore demands flexible models that capture the dynamics of time-series observations and learn view-invariant representations from limited labelled training samples. In this paper, we propose a novel few-shot deep learning approach to video-based person re-ID that learns comparable representations which are both discriminative and view-invariant. The proposed method builds on variational recurrent neural networks (VRNNs) and is trained adversarially to produce latent variables with temporal dependencies that are highly discriminative yet view-invariant for matching persons. Through extensive experiments on three benchmark datasets, we empirically demonstrate that our method creates view-invariant temporal features and achieves state-of-the-art performance.
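The abstract's core recipe, a VRNN whose step-wise latent variables are regularised by an adversarial view classifier, can be sketched in code. The PyTorch fragment below is a minimal, hypothetical illustration and not the authors' implementation: all module names and dimensions are assumptions, and the gradient-reversal mechanism (borrowed from domain-adversarial training) is one plausible way to realise the adversarial view-invariance objective the abstract describes.

```python
# Hypothetical sketch: a variational recurrent cell (VRNN) whose per-step
# latent z_t is pushed toward view invariance by an adversarial discriminator.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reversed, scaled gradient on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

class VRNNCell(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)        # p(z_t | h_{t-1})
        self.enc = nn.Linear(x_dim + h_dim, 2 * z_dim)  # q(z_t | x_t, h_{t-1})
        self.dec = nn.Linear(z_dim + h_dim, x_dim)      # p(x_t | z_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)     # h_t = f(x_t, z_t, h_{t-1})

    def forward(self, x_t, h):
        mu_p, logvar_p = self.prior(h).chunk(2, dim=-1)
        mu_q, logvar_q = self.enc(torch.cat([x_t, h], -1)).chunk(2, dim=-1)
        # Reparameterisation trick: sample z_t from the approximate posterior.
        z_t = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        x_rec = self.dec(torch.cat([z_t, h], -1))
        h = self.rnn(torch.cat([x_t, z_t], -1), h)
        # KL(q || p) between two diagonal Gaussians, summed over latent dims.
        kl = 0.5 * (logvar_p - logvar_q
                    + ((logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp())
                    - 1).sum(-1)
        return z_t, x_rec, kl, h

class ViewDiscriminator(nn.Module):
    """Predicts which camera view a latent came from; gradient reversal makes
    the encoder learn to fool it, encouraging view-invariant z_t."""
    def __init__(self, z_dim, n_views, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_views))
    def forward(self, z):
        return self.net(GradReverse.apply(z, self.lamb))

# Usage sketch: unroll over a sequence of frame features x of shape (T, B, x_dim)
# cell = VRNNCell(x_dim=128, z_dim=64, h_dim=128)
# disc = ViewDiscriminator(z_dim=64, n_views=2)
# h = torch.zeros(batch_size, 128)
# for x_t in x:
#     z_t, x_rec, kl, h = cell(x_t, h)
#     view_logits = disc(z_t)  # cross-entropy vs. camera labels, grad reversed
```

In this reading, the total loss would combine the VRNN evidence lower bound (reconstruction plus KL terms), an identity-discriminative term on the latents, and the adversarial view-classification term; the exact weighting and loss forms used in the paper are not stated on this page.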
