3.8 Proceedings Paper

Learning Fast Sample Re-weighting Without Reward Data

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00076

Keywords

-


Summary

Training sample re-weighting is an effective approach for addressing data biases, but existing methods have limitations such as relying on unbiased reward data and requiring expensive second-order computation. This paper introduces a novel learning-based fast sample re-weighting (FSR) method that overcomes these limitations by learning from history and sharing features to reduce optimization costs. Experimental results show that the proposed FSR method achieves competitive performance in label noise robustness and long-tailed recognition while significantly improving training efficiency.
Abstract

Training sample re-weighting is an effective approach for tackling data biases such as imbalanced and corrupted labels. Recent methods develop learning-based algorithms that learn sample re-weighting strategies jointly with model training, based on reinforcement learning and meta-learning frameworks. However, their dependence on additional unbiased reward data limits their general applicability. Furthermore, existing learning-based sample re-weighting methods require nested optimization of model and weighting parameters, which demands expensive second-order computation. This paper addresses these two problems and presents a novel learning-based fast sample re-weighting (FSR) method that does not require additional reward data. The method is based on two key ideas: learning from history to build proxy reward data, and feature sharing to reduce the optimization cost. Our experiments show the proposed method achieves results competitive with the state of the art on label noise robustness and long-tailed recognition, while significantly improving training efficiency. The source code is publicly available at https://github.com/google-research/google-research/tree/master/ieg.
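As a rough illustration of the proxy-reward idea (not the authors' implementation; their TensorFlow code is at the URL above), the sketch below re-weights samples for a toy logistic-regression model with noisy labels. It assumes that "learning from history" can be read as promoting samples the model has classified confidently and consistently over recent epochs into a proxy reward set, and it weights each sample by how well its gradient aligns with the mean gradient over that set. The feature-sharing component of FSR is not modeled here, and all names and thresholds are invented for illustration.

```python
# Minimal sketch (hypothetical, not the paper's algorithm): re-weight training
# samples using a proxy reward set mined from training history, so no extra
# curated unbiased data is needed.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grads(w, X, y):
    # Gradient of the logistic loss for each sample: (p - y) * x
    p = sigmoid(X @ w)
    return (p - y)[:, None] * X

# Toy data with 20% label noise.
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)
noisy = rng.random(n) < 0.2
y[noisy] = 1.0 - y[noisy]

w = np.zeros(d)
lr = 0.1
history = np.zeros(n)  # decayed count of confident, self-consistent predictions
for epoch in range(50):
    p = sigmoid(X @ w)
    confident = np.abs(p - 0.5) > 0.4
    agrees = (p > 0.5) == (y > 0.5)
    history = 0.9 * history + (confident & agrees)

    # Proxy reward set: the most historically reliable samples (no extra data).
    proxy = np.argsort(history)[-50:]

    g = per_sample_grads(w, X, y)        # (n, d) per-sample gradients
    g_proxy = g[proxy].mean(axis=0)      # reward-set gradient direction
    scores = g @ g_proxy                 # alignment with proxy gradient
    weights = np.maximum(scores, 0.0)
    weights /= weights.sum() + 1e-12

    w -= lr * (weights[:, None] * g).sum(axis=0)  # weighted gradient step

acc = np.mean((sigmoid(X @ w) > 0.5) == ((X @ w_true) > 0))
print(f"train accuracy vs. clean labels: {acc:.3f}")
```

Note that the sketch uses only first-order gradient alignment against the history-derived proxy set, avoiding the nested second-order optimization that the abstract identifies as the main cost of earlier meta-learning re-weighting methods.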

