Article

ASCENT: Active Supervision for Semi-Supervised Learning

Journal

IEEE Transactions on Knowledge and Data Engineering
Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2019.2897307

Keywords

Task analysis; Clustering algorithms; Data models; Redundancy; Uncertainty; Semisupervised learning; Active learning; semi-supervised learning; iterative learning; clustering; classification; data filtering

Funding

  1. National Natural Science Foundation of China [61170035, 61272420, 81674099, 61502233]
  2. Fundamental Research Fund for the Central Universities [30916011328, 30918015103, 30918012204]
  3. Nanjing Science and Technology Development Plan Project [201805036, 61403120501]
  4. China Scholarship Council [201706840105]

Active learning algorithms attempt to overcome the labeling bottleneck by posing queries over a large collection of unlabeled examples. Existing batch-mode active learning algorithms suffer from three limitations: (1) methods based on a similarity function or on optimizing a diversity measure may yield suboptimal performance and select sets containing redundant examples; (2) models that impose assumptions on the data struggle to find examples that are both informative and representative; (3) noisy labels remain an obstacle for these algorithms. In this paper, we propose a novel active learning method that maps embeddings of labeled examples to those of unlabeled ones and back via deep neural networks. The active scheme favors correct association cycles that end at the same class from which the association started; it thereby accounts for both the informativeness and the representativeness of examples while remaining robust to noisy labels. We apply our active learning method to semi-supervised classification and clustering. A submodular function is designed to reduce redundancy among the selected examples. Specifically, we incorporate our batch-mode active scheme into classification approaches, improving their generalization ability. For semi-supervised clustering, we use our active scheme to generate constraints that speed convergence and outperform unsupervised clustering. Finally, we apply our active learning method to data filtering. To validate the effectiveness of the proposed algorithms, extensive experiments are conducted on diverse benchmark datasets for different tasks, i.e., classification, clustering, and data filtering, and the results demonstrate consistent and substantial improvements over state-of-the-art approaches.
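The two ingredients the abstract describes — scoring unlabeled examples via labeled→unlabeled→labeled association probabilities in embedding space, and greedily assembling a low-redundancy batch with a submodular objective — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the entropy-based informativeness score, the facility-location-style submodular gain, and the function names (`unlabeled_entropy_scores`, `greedy_batch`) are all choices made here for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unlabeled_entropy_scores(emb_l, emb_u, labels, n_classes):
    """Score unlabeled points by the entropy of the class they associate to.

    Association probabilities are derived from embedding similarities
    (labeled -> unlabeled -> class); a point whose association is spread
    across classes is treated as more informative to query.
    """
    sim = emb_l @ emb_u.T                 # (L, U) similarity matrix
    p_ul = softmax(sim.T, axis=1)         # (U, L): unlabeled -> labeled
    onehot = np.eye(n_classes)[labels]    # (L, C) class indicators
    p_uc = p_ul @ onehot                  # (U, C): class-association probs
    return -(p_uc * np.log(p_uc + 1e-12)).sum(axis=1)  # higher = more uncertain

def greedy_batch(emb_u, scores, k, lam=1.0):
    """Greedily select k points, trading informativeness against redundancy.

    Uses a facility-location-style submodular coverage gain (a common
    choice; the paper's exact submodular function may differ).
    """
    sim = emb_u @ emb_u.T
    covered = np.zeros(len(emb_u))
    selected = []
    for _ in range(k):
        # Marginal coverage gain plus per-point informativeness score
        gain = lam * np.maximum(sim - covered[None, :], 0).sum(axis=1) + scores
        gain[selected] = -np.inf          # never re-pick a selected point
        i = int(np.argmax(gain))
        selected.append(i)
        covered = np.maximum(covered, sim[i])
    return selected
```

Because the coverage term is submodular, the greedy loop enjoys the usual (1 − 1/e) approximation guarantee for the coverage part of the objective, which is the standard motivation for this style of batch selection.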
