Article

How to handle noisy labels for robust learning from uncertainty

Journal

NEURAL NETWORKS
Volume 143, Issue -, Pages 209-217

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2021.06.012

Keywords

Deep network; Noisy labels; Robust learning; Uncertain aware joint training

Funding

  1. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [NRF-2019R1I1A3A02058096, NRF-2020R1A6A1A12047945]
  2. Brain Research Program through the National Research Foundation of Korea (NRF) - Ministry of Science, ICT & Future Planning [NRF-2017M3C7A1044815]
  3. Grand Information Technology Research Center support program [IITP-2021-2020-0-01462]
  4. INHA UNIVERSITY Research Grant, South Korea

Abstract

This paper investigates the problem of robustly training deep neural networks with noisy labels and proposes a training method called UACT, which uses uncertainty estimates to identify likely-clean labels, avoiding over-fitting and achieving good generalization performance.
Deep neural networks (DNNs) are often trained on data containing large amounts of noisy labels when applied in practice. Because DNNs have enough capacity to fit arbitrary noisy labels, training them robustly under label noise is known to be difficult: noisy labels degrade performance through the memorization effect caused by over-fitting. Earlier state-of-the-art methods relied on the small-loss trick to address this robust training problem. In this paper, the relationship between uncertainty and clean labels is analyzed. We present a novel training method, Uncertain Aware Co-Training (UACT), that uses not only the small-loss trick but also labels that are likely to be clean, selected according to the network's uncertainty. UACT prevents DNNs from over-fitting even under extremely noisy labels. By making better use of the uncertainty obtained from the network itself, it achieves good generalization performance. We compare the proposed method with current state-of-the-art algorithms on noisy versions of MNIST, CIFAR-10, CIFAR-100, T-ImageNet and News to demonstrate its effectiveness. (C) 2021 Elsevier Ltd. All rights reserved.
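The abstract describes UACT only at a high level. As a rough illustration of the general idea, the sketch below combines the small-loss trick with an entropy-based uncertainty filter inside a co-training step. It is an assumption-laden approximation, not the authors' implementation: the helper names (predictive_entropy, select_clean, co_training_step), the use of predictive entropy as the uncertainty measure, and the thresholds are all illustrative choices.

```python
# Illustrative sketch only (assumed details, not the authors' UACT code):
# small-loss selection plus an uncertainty filter inside a co-training step.
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Entropy of the softmax output per sample, used here as a simple
    (assumed) proxy for how uncertain the network is about each example."""
    p = F.softmax(logits, dim=1)
    return -(p * p.log().clamp(min=-20.0)).sum(dim=1)

def select_clean(logits, labels, keep_ratio=0.7, max_entropy=1.0):
    """Small-loss trick: keep the keep_ratio fraction of samples with the
    smallest cross-entropy, then drop those whose predictive entropy is
    above max_entropy (the uncertainty-based clean-label filter)."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_ratio * labels.size(0)))
    small_loss_idx = torch.topk(-losses, k).indices      # k smallest losses
    ent = predictive_entropy(logits)[small_loss_idx]
    return small_loss_idx[ent <= max_entropy]             # small loss AND confident

def co_training_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.7):
    """Each network picks likely-clean samples for its peer, so a network's
    own selection errors are not fed back into itself."""
    with torch.no_grad():
        idx_for_b = select_clean(net_a(x), y, keep_ratio)
        idx_for_a = select_clean(net_b(x), y, keep_ratio)

    if idx_for_a.numel() > 0:
        opt_a.zero_grad()
        F.cross_entropy(net_a(x[idx_for_a]), y[idx_for_a]).backward()
        opt_a.step()

    if idx_for_b.numel() > 0:
        opt_b.zero_grad()
        F.cross_entropy(net_b(x[idx_for_b]), y[idx_for_b]).backward()
        opt_b.step()
```

In this sketch each network selects likely-clean samples for its peer (the usual co-teaching arrangement), so a network's own selection mistakes are less likely to be reinforced; the uncertainty filter then removes small-loss samples that the selecting network is still unsure about.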
