Article

Weighted Pseudo Labeled Data and Mutual Learning for Semi-Supervised Classification

Journal

IEEE ACCESS
Volume 9, Issue -, Pages 36522-36534

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3063176

Keywords

Training; Feature extraction; Data models; Semisupervised learning; Classification algorithms; Gallium nitride; Biological system modeling; Image classification; semi-supervised learning; weighted pseudo labeled data; mutual learning

Funding

  1. Natural Science Foundation of China [62001133, 61661017, 61967005, U1501252]
  2. Natural Science Foundation of Guangxi Province [2017GXNSFBA198212]
  3. Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education [CRKL150103]
  4. Innovation Project of GUET Graduate Education [2019YCXS020]


A semi-supervised classification algorithm involving weighted pseudo labeled data and mutual learning is proposed to improve classification performance and rectify incorrect pseudo labels. By utilizing selection, weighting, and mutual learning strategies, the accuracy of pseudo label predictions is effectively improved.
In this article, a semi-supervised classification algorithm based on weighted pseudo labeled data and mutual learning is proposed. The purpose of our method is to improve the classification performance of semi-supervised learning models and to rectify incorrect pseudo labels during training. Specifically, the algorithm is built with a deep convolutional neural network and an ensemble learning model. First, output smearing is employed to construct different training sets and perform model initialization. The pseudo labels of unlabeled data are inferred from network predictions. Second, based on selection and weighting strategies for pseudo labeled data, pseudo labeled data with high confidence are selected and added to the real labeled training set. Accordingly, the model is retrained on the weighted pseudo labeled data. Finally, a mutual learning strategy is applied to enhance the prediction consistency among classifiers. Furthermore, diversity fine-tuning and mutual learning are performed alternately to determine the optimal balance between diversity and consistency, which consequently improves the accuracy of the pseudo label predictions. The experimental results on three benchmark datasets, MNIST, CIFAR10, and SVHN, demonstrate that the proposed method effectively rectifies incorrect pseudo labels. Notably, the method achieves the best performance compared with state-of-the-art semi-supervised classification methods.
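The abstract does not spell out the exact selection and weighting rule, so the following is only a minimal sketch of the general idea: keep pseudo labeled samples whose predicted class probability exceeds a confidence threshold, and weight each retained sample's loss contribution by that confidence. The function name and the threshold `tau` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def select_and_weight_pseudo_labels(probs, tau=0.95):
    """Confidence-based pseudo label selection and weighting (illustrative).

    probs: (N, C) array of predicted class probabilities for N unlabeled
           samples over C classes.
    tau:   confidence threshold; only samples whose top predicted
           probability reaches tau are kept (assumed value, not from the paper).

    Returns the indices of the selected samples, their pseudo labels,
    and per-sample weights equal to the prediction confidence.
    """
    confidence = probs.max(axis=1)         # top class probability per sample
    pseudo_labels = probs.argmax(axis=1)   # predicted class used as pseudo label
    selected = np.where(confidence >= tau)[0]
    weights = confidence[selected]         # weight the loss by confidence
    return selected, pseudo_labels[selected], weights

# Example: three unlabeled samples, two classes
probs = np.array([[0.98, 0.02],
                  [0.60, 0.40],
                  [0.05, 0.95]])
idx, labels, w = select_and_weight_pseudo_labels(probs, tau=0.90)
# The middle sample (confidence 0.60) falls below the threshold and is discarded.
```

The retained samples would then be appended to the labeled training set with their weights scaling the per-sample loss during retraining.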

