Article

Successive Consensus Clustering for Unsupervised Video-Based Person Re-Identification

Journal

IEEE SIGNAL PROCESSING LETTERS
Volume 29, Pages 822-826

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/LSP.2022.3156443

Keywords

Feature extraction; Training; Clustering algorithms; Optimization; Signal processing algorithms; Neural networks; Memory modules; Unsupervised; person re-identification; consensus clustering; contrastive learning

Funding

  1. National Natural Science Foundation of China [62072482]
  2. Guangdong-Hong Kong-Macao Greater Bay Area International Science and Technology Innovation Cooperation Project [2021A0505030080]


This paper addresses unsupervised video-based person re-identification and proposes a Successive Consensus Clustering framework for jointly refining pseudo-labels and the model. By leveraging consensus clustering over multiple frames together with a cluster successive memory mechanism, the method stabilizes training and improves performance.
Person re-identification aims to match the same person across non-overlapping cameras. This paper focuses on unsupervised video-based person re-identification, where the mainstream approach is to obtain pseudo-labels by clustering samples and then train a classification model on them. A potential threat in this scheme is that noisy pseudo-labels may damage the optimization of the model. To mitigate this risk, we propose a Successive Consensus Clustering framework that optimizes the pseudo-labels and the model iteratively. First, we apply consensus clustering over the multiple frames of a video, which generates high-quality pseudo-labels for pedestrians. Second, we develop contrastive learning based on a cluster successive memory mechanism, which establishes correlations between the clustering results of different epochs and thereby stabilizes model training. Experiments on three large-scale datasets show that our method outperforms the previous state of the art, by 10.6% in rank-1 and 18.6% in mAP on MARS, and by 9.6% in rank-1 and 13.3% in mAP on DukeMTMC-VideoReID.
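The abstract only sketches the frame-level consensus idea. One simple way to realize consensus over the frames of a tracklet is a majority vote on per-frame cluster assignments, discarding tracklets with low agreement. The sketch below is illustrative, not the paper's actual algorithm; the function name and the agreement threshold are assumptions.

```python
from collections import Counter

def consensus_labels(frame_labels, min_agreement=0.5):
    """Majority-vote consensus over per-frame cluster assignments.

    frame_labels: list of lists; frame_labels[i] holds the cluster id
    assigned to each frame of tracklet i by any frame-level clustering.
    A tracklet whose most frequent cluster wins less than `min_agreement`
    of the votes is marked as an outlier (-1). Both the voting rule and
    the threshold are hypothetical simplifications of the paper's method.
    """
    video_labels = []
    for labels in frame_labels:
        top, count = Counter(labels).most_common(1)[0]
        video_labels.append(top if count / len(labels) >= min_agreement else -1)
    return video_labels

# Tracklet 0 agrees on cluster 0, tracklet 1 on cluster 2;
# tracklet 2 has no majority and is rejected.
print(consensus_labels([[0, 0, 1], [2, 2, 2], [0, 1, 2, 3]]))  # [0, 2, -1]
```

Rejected tracklets would simply be excluded from that epoch's training set, which is a common way to keep noisy pseudo-labels out of the optimization.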
