4.7 Article

Collaborative learning with corrupted labels

Journal

NEURAL NETWORKS
Volume 125, Pages 205-213

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2020.02.010

Keywords

Deep neural networks; Corrupted labels; Robustness

Funding

  1. Key Projects of Ministry of Science and Technology of China [2018AAA0101604, 2018YFB1702903]
  2. National Natural Science Foundation of China [61906106, 61936009]

Abstract

Deep neural networks (DNNs) have been very successful for supervised learning. However, their high generalization performance often comes at the high cost of manually annotating data. Collecting low-quality labeled datasets is relatively cheap, e.g., by using web search engines, but DNNs tend to overfit corrupted labels easily. In this paper, we propose a collaborative learning (co-learning) approach to improve the robustness and generalization performance of DNNs on datasets with corrupted labels. This is achieved by designing a deep network with two separate branches, coupled with a relabeling mechanism. Co-learning can safely recover the true labels of most mislabeled samples, not only preventing the model from overfitting the noise but also exploiting useful information from all the samples. Although very simple, the proposed algorithm achieves high generalization performance even when a large portion of the labels is corrupted. Experiments show that co-learning consistently outperforms existing state-of-the-art methods on three widely used benchmark datasets. (c) 2020 Elsevier Ltd. All rights reserved.
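As a rough illustration of the two-branch architecture and relabeling mechanism described in the abstract, the PyTorch sketch below pairs a shared backbone with two classifier heads and relabels a sample only when both heads confidently agree on a class that differs from the given label. The tiny backbone, the agreement-plus-confidence rule, and the 0.9 threshold are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchNet(nn.Module):
    """Shared feature extractor followed by two separate classifier branches."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Tiny convolutional backbone; stands in for whatever backbone the paper uses.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.branch_a = nn.Linear(64, num_classes)
        self.branch_b = nn.Linear(64, num_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.branch_a(h), self.branch_b(h)

@torch.no_grad()
def relabel(model, x, y, threshold=0.9):
    """Replace a given label when both branches confidently agree on another class.

    The agreement rule and threshold are assumptions made for illustration;
    the paper's actual relabeling criterion may differ.
    """
    logits_a, logits_b = model(x)
    prob_a = F.softmax(logits_a, dim=1)
    prob_b = F.softmax(logits_b, dim=1)
    pred_a, pred_b = prob_a.argmax(dim=1), prob_b.argmax(dim=1)
    confident = (prob_a.max(dim=1).values > threshold) & (prob_b.max(dim=1).values > threshold)
    replace = confident & (pred_a == pred_b) & (pred_a != y)
    y = y.clone()
    y[replace] = pred_a[replace]   # recover the (presumed) true label
    return y

# Usage sketch: new_labels = relabel(model, images, noisy_labels)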
