Article

How to handle noisy labels for robust learning from uncertainty

Journal

NEURAL NETWORKS
Volume 143, Pages 209-217

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2021.06.012

Keywords

Deep network; Noisy labels; Robust learning; Uncertain aware joint training

Funding

  1. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [NRF-2019R1I1A3A02058096, NRF-2020R1A6A1A12047945]
  2. Brain Research Program through the National Research Foundation of Korea (NRF) - Ministry of Science, ICT & Future Planning [NRF-2017M3C7A1044815]
  3. Grand Information Technology Research Center support program [IITP-2021-2020-0-01462]
  4. INHA UNIVERSITY Research Grant, South Korea

This paper investigates the problem of robustly training deep neural networks with noisy labels and proposes a novel training method, UACT, which combines uncertainty estimates with likely-clean labels to avoid over-fitting and achieve good generalization performance.
In practice, most deep neural networks (DNNs) are trained with large amounts of noisy labels. Because DNNs have the capacity to fit arbitrary noisy labels, it is known to be difficult to train them robustly: noisy labels degrade performance through the memorization effect caused by over-fitting. Earlier state-of-the-art methods used the small-loss trick to address the robust training problem with noisy labels. In this paper, the relationship between uncertainty and clean labels is analyzed. We present a novel training method, Uncertain Aware Co-Training (UACT), that uses not only the small-loss trick but also labels likely to be clean, selected on the basis of uncertainty. Our robust learning technique (UACT) prevents DNNs from over-fitting extremely noisy labels. By making better use of the uncertainty acquired from the network itself, we achieve good generalization performance. We compare the proposed method with current state-of-the-art algorithms on noisy versions of MNIST, CIFAR-10, CIFAR-100, T-ImageNet, and News to demonstrate its effectiveness. (C) 2021 Elsevier Ltd. All rights reserved.
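The two selection criteria mentioned in the abstract can be illustrated concretely. Below is a minimal sketch of the small-loss trick combined with an uncertainty filter; the function names (`select_clean`, `predictive_entropy`), the use of predictive entropy as the uncertainty measure, and the specific thresholds are illustrative assumptions, not the paper's exact selection rule.

```python
import numpy as np

def predictive_entropy(probs):
    """Per-sample entropy of softmax outputs, a common uncertainty proxy."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_clean(losses, uncertainties, keep_ratio=0.5, unc_threshold=0.5):
    """Select indices of samples that are likely to have clean labels.

    Combines the small-loss trick (keep the keep_ratio fraction of samples
    with the lowest per-sample loss) with an uncertainty filter (drop
    samples whose uncertainty exceeds unc_threshold). In a co-training
    setup, each of the two networks would pass its selected set to the
    peer network for the next update.
    """
    n_keep = int(len(losses) * keep_ratio)
    small_loss_idx = np.argsort(losses)[:n_keep]   # small-loss trick
    # among the small-loss samples, keep only the low-uncertainty ones
    return small_loss_idx[uncertainties[small_loss_idx] < unc_threshold]

# Example: sample 2 has a small loss but high uncertainty, so it is dropped.
losses = np.array([0.1, 2.0, 0.2, 3.0])
unc = np.array([0.1, 0.2, 0.9, 0.1])
clean_idx = select_clean(losses, unc)   # -> only index 0 survives
```

In a full co-training loop, two networks would each compute `clean_idx` on a mini-batch and train on the set selected by the other network, which is what keeps their selection biases from reinforcing each other.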
