4.8 Article

Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2018.2858821

Keywords

Semi-supervised learning; supervised learning; robustness; adversarial training; adversarial examples; deep learning

Funding

  1. New Energy and Industrial Technology Development Organization (NEDO), Japan

Abstract

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given the input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only virtually adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of the virtual adversarial loss can be computed with no more than two pairs of forward and back propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
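
To make the procedure described in the abstract concrete, the following is a minimal PyTorch sketch of a virtual adversarial loss term, assuming a single power-iteration step to approximate the adversarial direction. The function name `vat_loss` and the hyperparameter names `xi`, `eps`, and `n_power` are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of a VAT-style smoothness penalty (assumed names and defaults).
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Flatten everything but the batch dimension and normalize to unit L2 norm.
    d_flat = d.view(d.size(0), -1)
    norm = d_flat.norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    return d / (norm + 1e-8)

def vat_loss(model, x, xi=1e-6, eps=8.0, n_power=1):
    """Local smoothness term around x; note that no labels are used."""
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)  # p(y|x), treated as a fixed target

    # Power-iteration approximation of the virtually adversarial direction:
    # perturb by a tiny random step xi*d, then follow the gradient of the
    # divergence with respect to the perturbation.
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        pred_hat = model(x + xi * d)
        adv_kl = F.kl_div(F.log_softmax(pred_hat, dim=1), pred,
                          reduction='batchmean')
        grad = torch.autograd.grad(adv_kl, d)[0]
        d = _l2_normalize(grad.detach())

    # Virtual adversarial perturbation and the resulting smoothness penalty.
    r_adv = eps * d
    pred_hat = model(x + r_adv)
    return F.kl_div(F.log_softmax(pred_hat, dim=1), pred,
                    reduction='batchmean')
```

In a semi-supervised setup, a term like this would typically be added to the supervised cross-entropy loss and evaluated on both labeled and unlabeled inputs, since computing it requires no label information.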

