Article

Extending version-space theory to multi-label active learning with imbalanced data

Journal

PATTERN RECOGNITION
Volume 142

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2023.109690

Keywords

Multi-label active learning; Sample-label pairs; Inconsistency; Version space; Imbalanced data

Version space is a crucial concept in supervised learning, but its application in multi-label active learning has not been explored. This paper extends version-space theory from the single-label to the multi-label scenario, establishes a spatial structure for the multi-label version space, and proposes both a simplified representation and a new multi-label active learning algorithm. The algorithm is further enhanced to address class imbalance in multi-label data. Experimental comparisons demonstrate the feasibility and effectiveness of the proposed methods.

Version space, defined as the subset of the hypothesis space consistent with the training samples, is an important concept in supervised learning. It has been successfully applied to evaluate the informativeness of unlabeled samples in traditional single-label active learning. Specifically, the samples on which the version-space members disagree most can shrink the version space as fast as possible; such samples are given high priority for domain-expert annotation, so the learner can construct a high-performance classifier while labeling as few samples as possible. We point out that the concept of version space has not yet been extended to multi-label environments, which hinders its application in multi-label active learning. This paper attempts to extend version-space theory from the single-label scenario to the multi-label scenario, builds a spatial structure for the multi-label version space, generalizes it from the finite case to the infinite case, puts forward a simplified representation for it, and accordingly proposes a new multi-label active learning algorithm. Moreover, considering the imbalance issue in multi-label data, the algorithm is further improved by allocating different annotation numbers to the labels. Experimental comparisons verify the feasibility and effectiveness of the proposed methods. (C) 2023 Elsevier Ltd. All rights reserved.
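As a rough illustration of the selection principle described in the abstract, the sketch below approximates the version space for each label with a committee of bootstrapped classifiers and scores each unlabeled sample-label pair by how evenly the committee splits on it. This is a hedged sketch, not the paper's algorithm: the helper pairwise_disagreement and the variable names (X_lab, Y_lab, X_unl, n_members) are hypothetical, and scikit-learn random forests stand in for whatever base learner the paper actually uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pairwise_disagreement(X_lab, Y_lab, X_unl, n_members=10, seed=0):
    """Score each unlabeled sample-label pair by committee disagreement.

    A committee of bootstrapped classifiers per label stands in for an
    approximate version space; pairs on which the committee splits most
    evenly are treated as the most informative queries.
    """
    n_unl = X_unl.shape[0]
    n_labels = Y_lab.shape[1]
    scores = np.zeros((n_unl, n_labels))
    for j in range(n_labels):
        votes = np.zeros((n_members, n_unl))
        for m in range(n_members):
            # Bootstrap resampling of the labeled pool yields committee
            # members that are all (approximately) consistent with it.
            rng = np.random.RandomState(seed + m)
            idx = rng.choice(len(X_lab), size=len(X_lab), replace=True)
            clf = RandomForestClassifier(n_estimators=20, random_state=seed + m)
            clf.fit(X_lab[idx], Y_lab[idx, j])
            votes[m] = clf.predict(X_unl)
        pos_frac = votes.mean(axis=0)                       # fraction voting "relevant"
        scores[:, j] = 1.0 - np.abs(2.0 * pos_frac - 1.0)   # 1.0 = maximal split
    return scores  # query the sample-label pairs with the highest scores
```

In the spirit of the paper's imbalance-aware improvement, one could then allocate larger query budgets to labels with fewer positive examples instead of ranking all sample-label pairs on a single global list.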
