Article

Hardening against adversarial examples with the smooth gradient method

Journal

SOFT COMPUTING
Volume 22, Issue 10, Pages 3203-3213

Publisher

SPRINGER
DOI: 10.1007/s00500-017-2998-4

Keywords

-

Funding

  1. NVIDIA Corporation

Commonly used methods in deep learning do not make use of the residual gradient available at the inputs to update the representation learned from the dataset. It has been shown that this residual gradient, which can be interpreted as the first-order gradient of the input sensitivity at a particular point, may be used to improve generalisation in feed-forward neural networks, including those with fully connected and convolutional layers. We explore how these input gradients are related to the input perturbations used to generate adversarial examples, and show that networks trained with this technique are more robust to attacks generated with the fast gradient sign method.
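The core idea of penalising the network's sensitivity to its inputs during training can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical rendering of an input-gradient penalty in the style of double backpropagation; the exact smoothing penalty used in the paper may differ, and the function name and the weight lam are illustrative choices, not the paper's API.

import torch
import torch.nn.functional as F

def smooth_gradient_loss(model, x, y, lam=0.1):
    # Task loss plus an L2 penalty on the gradient of the loss with
    # respect to the inputs (a double-backpropagation-style penalty).
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # create_graph=True keeps the input gradient differentiable, so
    # the penalty term itself can be minimised by the optimiser.
    (grad_x,) = torch.autograd.grad(task_loss, x, create_graph=True)
    penalty = grad_x.pow(2).flatten(1).sum(dim=1).mean()
    return task_loss + lam * penalty

Minimising this combined objective drives the input gradients towards zero around the training points, which is what makes the learned function locally smoother with respect to input perturbations.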
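For reference, the fast gradient sign method used to evaluate robustness perturbs each input in the direction of the sign of the loss gradient with respect to that input. A minimal sketch follows, with epsilon and the [0, 1] clipping range as illustrative assumptions for image-like inputs:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    # Step each input by epsilon in the direction of the sign of the
    # loss gradient, then clip back to the valid input range.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad_x,) = torch.autograd.grad(loss, x)
    return (x + epsilon * grad_x.sign()).clamp(0.0, 1.0).detach()

A network hardened with the input-gradient penalty above would then be evaluated on examples produced by this attack.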
