Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Issue -, Pages 7815-7829
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2021.3104169
Keywords
Noise measurement; Training; Prototypes; Reliability; Training data; Adaptation models; Refining; Person re-ID; unsupervised domain adaptation; pseudo label noise
Funding
- National Natural Science Foundation of China [62088102]
- PKU-NTU Joint Research Institute (JRI) - Ng Teng Fong Charitable Foundation
Abstract
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the absence of labels for the target domain data. To handle this problem, some recent works adopt clustering algorithms to off-line generate pseudo labels, which can then be used as the supervision signal for on-line feature learning in the target domain. However, the off-line generated labels often contain substantial noise that significantly hinders the discriminability of the on-line learned features, and thus limits the final UDA re-ID performance. To this end, we propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase, to alternately boost the label purity and feature discriminability in the target domain for more reliable re-ID. Specifically, at the off-line phase, a new hierarchical clustering scheme is proposed, which selects representative prototypes for every coarse cluster. Thus, labels can be effectively refined by using the inherent hierarchical information of person images. Besides, at the on-line phase, we propose an instant memory spread-out (IM-spread-out) regularization, which takes advantage of the proposed instant memory bank to store sample features of the entire dataset and enables spread-out feature learning over the entire training data instantly. Our Dual-Refinement method reduces the influence of noisy labels and refines the learned features within the alternating training process. Experiments demonstrate that our method outperforms the state-of-the-art methods by a large margin.
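The instant memory bank with spread-out regularization described in the abstract could be sketched roughly as below. This is a minimal NumPy illustration, not the paper's implementation: the class name, momentum coefficient, and temperature value are assumptions made for the example, and the loss is an instance-discrimination-style softmax over bank similarities that pushes each sample toward its own slot and away from all others.

```python
import numpy as np

# Hypothetical sketch of an "instant memory bank": one L2-normalized
# feature slot per training sample, updated on the fly each batch, with
# a spread-out loss computed against the entire bank at once.
class InstantMemoryBank:
    def __init__(self, num_samples, feat_dim, momentum=0.2):
        rng = np.random.default_rng(0)
        feats = rng.normal(size=(num_samples, feat_dim))
        self.bank = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        self.momentum = momentum

    def update(self, indices, features):
        # Momentum-update the stored slots for this batch, then renormalize
        # so cosine similarity reduces to a dot product.
        f = self.momentum * self.bank[indices] + (1 - self.momentum) * features
        self.bank[indices] = f / np.linalg.norm(f, axis=1, keepdims=True)

    def spread_out_loss(self, indices, features, temperature=0.05):
        # Softmax over similarities to every bank entry; maximizing the
        # probability of each sample's own slot spreads features apart.
        sims = features @ self.bank.T / temperature        # (batch, num_samples)
        sims -= sims.max(axis=1, keepdims=True)            # numerical stability
        log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(indices)), indices].mean()
```

In a training loop, one would compute `spread_out_loss` for a batch, add it to the clustering-based supervision loss, and then call `update` with the same batch so the bank reflects the current features immediately.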