Article

SELF-LLP: Self-supervised learning from label proportions with self-ensemble

Journal

PATTERN RECOGNITION
Volume 129

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108767

Keywords

Learning from label proportion; Self-supervised learning; Self-ensemble strategy; Multi-task learning

Funding

  1. National Natural Science Foundation of China [61702099]
  2. UIBE Excellent Young Scholar Project [21YQ10]

Abstract

In this paper, a multi-task pipeline called SELF-LLP is proposed for the learning from label proportions (LLP) problem. By leveraging self-supervised learning and a self-ensemble strategy, the method makes full use of the information contained in the data and the model themselves, leading to improved classification performance and training efficiency.
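
As a point of reference for the LLP setting summarized above, the sketch below shows the bag-level proportion loss that LLP methods commonly minimize: the predicted class distribution, averaged over a bag, is matched to the bag's known label proportions. This is a minimal PyTorch illustration under assumed tensor shapes; the exact loss used by SELF-LLP is not specified in this abstract.

```python
# Minimal sketch (not the paper's exact objective): a bag-level proportion
# loss commonly used in LLP, matching the bag-averaged predicted class
# distribution to the given label proportions via KL divergence.
import torch
import torch.nn.functional as F

def proportion_loss(logits: torch.Tensor, bag_proportions: torch.Tensor) -> torch.Tensor:
    # logits: (bag_size, num_classes) instance-level predictions for one bag
    # bag_proportions: (num_classes,) known class proportions of that bag
    probs = F.softmax(logits, dim=1)      # per-instance class probabilities
    estimated = probs.mean(dim=0)         # estimated class proportions of the bag
    # KL(bag_proportions || estimated); the clamp avoids log(0)
    return F.kl_div(estimated.clamp_min(1e-8).log(), bag_proportions, reduction="sum")
```

With only this weak supervision, instance-level labels are never observed; the proportions constrain only the average of the model's predictions over each bag.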
In this paper, we tackle the learning from label proportions (LLP) problem, where the training data are arranged into bags and only the proportions of the different categories in each bag are available. Existing efforts mainly focus on training a model with only the limited proportion information in a weakly supervised manner, which results in an apparent performance gap to supervised learning as well as computational inefficiency. In this work, we propose a multi-task pipeline called SELF-LLP to make full use of the information contained in the data and the model themselves. Specifically, to learn stronger representations from the data, we leverage self-supervised learning as a plug-in auxiliary task that yields better transferable visual representations. The main insight is to benefit from self-supervised representation learning with deep models, which improves classification performance by a large margin. Meanwhile, to better exploit the implicit benefits of the model itself, we incorporate a self-ensemble strategy that guides the training process with auxiliary supervision constructed by aggregating multiple previous network predictions. A ramp-up mechanism is further employed to stabilize the training process. In extensive experiments, our method demonstrates compelling advantages in both accuracy and efficiency over several state-of-the-art LLP approaches. (c) 2022 Elsevier Ltd. All rights reserved.
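
The abstract describes three components: the weakly supervised proportion loss, a plug-in self-supervised auxiliary task, and a self-ensemble consistency term whose target aggregates previous network predictions and whose weight is ramped up over training. Below is a hedged sketch of how such a multi-task objective could be assembled; the rotation-prediction task, the exponential-moving-average aggregation, the sigmoid ramp-up schedule, and all names and hyper-parameters are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of a SELF-LLP-style multi-task objective (assumed details):
# (1) bag-level proportion loss, (2) a self-supervised auxiliary task
# (rotation prediction used here as an example), and (3) a self-ensemble
# consistency term against aggregated previous predictions, weighted by a
# sigmoid ramp-up schedule.
import math
import torch
import torch.nn.functional as F

def ramp_up_weight(epoch: int, ramp_up_epochs: int = 80, max_weight: float = 1.0) -> float:
    # Sigmoid-shaped ramp-up, a common choice for stabilizing consistency training.
    if epoch >= ramp_up_epochs:
        return max_weight
    phase = 1.0 - epoch / ramp_up_epochs
    return max_weight * math.exp(-5.0 * phase * phase)

def self_llp_loss(class_logits, rot_logits, rot_labels,
                  bag_proportions, ensemble_targets, epoch):
    # class_logits:     (bag_size, num_classes) main-head predictions
    # rot_logits:       (bag_size, 4) auxiliary-head rotation predictions
    # rot_labels:       (bag_size,) rotation labels in {0, 1, 2, 3}
    # ensemble_targets: (bag_size, num_classes) aggregated previous predictions
    probs = F.softmax(class_logits, dim=1)
    # 1) weakly supervised bag-level proportion loss
    l_prop = F.kl_div(probs.mean(dim=0).clamp_min(1e-8).log(),
                      bag_proportions, reduction="sum")
    # 2) self-supervised auxiliary task (rotation prediction as an example)
    l_ssl = F.cross_entropy(rot_logits, rot_labels)
    # 3) self-ensemble consistency against aggregated previous predictions
    l_cons = F.mse_loss(probs, ensemble_targets)
    return l_prop + l_ssl + ramp_up_weight(epoch) * l_cons

# One way to aggregate previous predictions into the ensemble target, e.g. an
# exponential moving average updated after each epoch:
# ensemble_targets = alpha * ensemble_targets + (1.0 - alpha) * probs.detach()
```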
