Article

LaplaceNet: A Hybrid Graph-Energy Neural Network for Deep Semisupervised Classification

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2022.3203315

Keywords

Data models; Neural networks; Perturbation methods; Deep learning; Complexity theory; Training; Interpolation; Data augmentation; graph-based methods; image classification; pseudolabeling; semisupervised learning (SSL)

Funding

  1. U.K. Engineering and Physical Sciences Research Council (EPSRC)
  2. National Physical Laboratory (NPL)
  3. Cambridge Mathematics of Information in Healthcare (CMIH), University of Cambridge
  4. Cantab Capital Institute for the Mathematics of Information (CCIMI), University of Cambridge
  5. Philip Leverhulme Prize
  6. Royal Society Wolfson Fellowship
  7. EPSRC [EP/S026045/1, EP/T003553/1, EP/N014588/1, EP/T017961/1]
  8. Wellcome Innovator Award [RG98755]
  9. Leverhulme Trust
  10. European Union Horizon 2020 Research and Innovation Program through the Marie Skłodowska-Curie Grant (NoMADS) [777826]
  11. Alan Turing Institute
  12. CCIMI

Abstract

Summary: Semisupervised learning reduces the need for costly labeled data. This paper introduces LaplaceNet, a framework that combines graph-based pseudolabels with neural-network training to achieve state-of-the-art classification results; a multisampling augmentation approach further improves generalization.
Semisupervised learning (SSL) has received a lot of recent attention as it alleviates the need for large amounts of labeled data, which can be expensive, require expert knowledge, and be time-consuming to collect. Recent developments in deep semisupervised classification have reached unprecedented performance, and the gap between supervised learning and SSL is ever-decreasing. This improvement in performance has been based on the inclusion of numerous technical tricks, strong augmentation techniques, and costly optimization schemes with multiterm loss functions. We propose a new framework, LaplaceNet, for deep semisupervised classification that has a greatly reduced model complexity. We utilize a hybrid approach in which pseudolabels are produced by minimizing the Laplacian energy on a graph. These pseudolabels are then used to iteratively train a neural-network backbone. Our model outperforms state-of-the-art methods for deep semisupervised classification over several benchmark datasets. Furthermore, we consider the application of strong augmentations to neural networks theoretically and justify the use of a multisampling approach for SSL. We demonstrate, through rigorous experimentation, that a multisampling augmentation approach improves generalization and reduces the sensitivity of the network to augmentation.
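The core graph step described in the abstract — producing pseudolabels by minimizing the Laplacian energy f^T L f with labeled nodes clamped — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy graph, node indices, and two-class one-hot labels below are invented for illustration; the harmonic solution for the unlabeled block solves L_uu f_u = -L_ul f_l.

```python
import numpy as np

# Toy similarity graph: 6 nodes, two loosely connected clusters
# (nodes 0-2 and nodes 3-5, joined by the edge 2-3).
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))
L = D - W  # unnormalized graph Laplacian

labeled = [0, 5]            # one labeled node per cluster
unlabeled = [1, 2, 3, 4]
f_l = np.array([[1.0, 0.0],   # node 0 -> class 0 (one-hot rows)
                [0.0, 1.0]])  # node 5 -> class 1

# Minimizing f^T L f with f clamped on the labeled nodes gives the
# harmonic solution on the unlabeled block: L_uu f_u = -L_ul f_l.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ f_l)

# Hard pseudolabels for the unlabeled nodes (would feed network training).
pseudolabels = f_u.argmax(axis=1)
print(pseudolabels)  # nodes 1,2 follow cluster of node 0; nodes 3,4 follow node 5
```

In LaplaceNet the graph is rebuilt from network features each round and these pseudolabels supervise the next training phase; here the solve simply shows that Laplacian-energy minimization diffuses the clamped labels along graph edges.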

