Article

ResLT: Residual Learning for Long-Tailed Recognition

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3174892

Keywords

Tail; Head; Training; Magnetic heads; Image recognition; Transfer learning; Representation learning; Residual learning; imbalanced learning; long-tailed recognition

Abstract

This paper proposes a method that addresses the long-tailed data distribution problem by preserving dedicated capacity for low-frequency classes in the parameter space. Rather than simply summing the outputs of separate per-group branches, the authors design an effective residual fusion mechanism to enhance recognition of the medium+tail and tail classes. Experimental results demonstrate the effectiveness of this method.
Deep learning algorithms face great challenges with long-tailed data distributions, which are nevertheless quite common in real-world scenarios. Previous methods tackle the problem from either the input space (re-sampling classes with different frequencies) or the loss space (re-weighting classes with different weights), and suffer from heavy over-fitting to tail classes or difficult optimization during training. To alleviate these issues, we propose a more fundamental perspective for long-tailed recognition, i.e., the parameter space, and aim to preserve specific capacity for classes with low frequencies. From this perspective, the trivial solution, which utilizes different branches for the head, medium, and tail classes respectively and then sums their outputs as the final result, is not feasible. Instead, we design an effective residual fusion mechanism: one main branch is optimized to recognize images from all classes, while two residual branches are gradually fused and optimized to enhance recognition of images from the medium+tail classes and the tail classes, respectively. The branches are then aggregated into the final result by additive shortcuts. We test our method on several benchmarks, i.e., the long-tailed versions of CIFAR-10, CIFAR-100, Places, ImageNet, and iNaturalist 2018. Experimental results demonstrate the effectiveness of our method. Our code is available at https://github.com/jiequancui/ResLT.
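To make the residual fusion idea concrete, below is a minimal PyTorch sketch of a three-branch classifier head aggregated by additive shortcuts. This is a rough illustration under stated assumptions, not the authors' implementation (see the linked repository): the ResLTHead class, the 0/1 class-group masks, and all names are hypothetical, and the paper's gradual fusion schedule and training losses are omitted.

```python
# Sketch of the residual fusion mechanism from the abstract (illustrative only;
# the real implementation is at https://github.com/jiequancui/ResLT).
import torch
import torch.nn as nn

class ResLTHead(nn.Module):
    """Three classifier branches fused by additive shortcuts:
    - main branch: recognizes images from all classes
    - one residual branch specialized for medium+tail classes
    - one residual branch specialized for tail classes
    """
    def __init__(self, feat_dim: int, num_classes: int,
                 medium_tail_mask: torch.Tensor, tail_mask: torch.Tensor):
        super().__init__()
        self.main = nn.Linear(feat_dim, num_classes)
        self.res_medium_tail = nn.Linear(feat_dim, num_classes)
        self.res_tail = nn.Linear(feat_dim, num_classes)
        # 0/1 masks over the class axis, assumed precomputed from class
        # frequencies (hypothetical; the paper's grouping may differ)
        self.register_buffer("mt_mask", medium_tail_mask.float())
        self.register_buffer("t_mask", tail_mask.float())

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        logits = self.main(feat)
        # additive shortcuts: each residual branch only contributes logits
        # for the class group it specializes in
        logits = logits + self.res_medium_tail(feat) * self.mt_mask
        logits = logits + self.res_tail(feat) * self.t_mask
        return logits

# Usage with dummy data: 10 classes, of which 6 are medium+tail and 3 are tail
mt_mask = torch.tensor([0] * 4 + [1] * 6)
t_mask = torch.tensor([0] * 7 + [1] * 3)
head = ResLTHead(feat_dim=512, num_classes=10,
                 medium_tail_mask=mt_mask, tail_mask=t_mask)
logits = head(torch.randn(8, 512))  # shape: (batch, num_classes)
```

The masks confine each residual branch to the class group it specializes in, so the shortcut addition can only boost medium+tail and tail scores on top of the main branch rather than replacing it, which matches the "preserve specific capacity for low-frequency classes" intuition.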

