3.8 Proceedings Paper

Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-21101-0_7

Keywords

Machine learning; Deep learning; Security; Measurement; Perturbation methods; Robustness


Abstract

As we become increasingly reliant on Deep Learning (DL) models, safeguarding the security of these systems is essential. This paper examines the security issues in Deep Learning and, through experiments, analyses how to build more resilient models. The experiments identify the strengths and weaknesses of a new approach to improving the robustness of DL models against adversarial attacks. The results show improvements and yield new insights that researchers and practitioners can use as recommendations for developing more robust DL algorithms.
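The title names the core idea: increasing network capacity as a route to adversarial robustness. Below is a minimal sketch (not the authors' code) of how such an experiment is commonly set up in PyTorch: train models of different hidden widths and compare their accuracy under an FGSM perturbation. The MLP architecture, the width values, and the epsilon budget are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_mlp(width: int) -> nn.Module:
    """Small MLP for 28x28 inputs; `width` controls the model's capacity."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),
    )

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float) -> torch.Tensor:
    """Fast Gradient Sign Method: step along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def robust_accuracy(model: nn.Module, loader, epsilon: float,
                    device: str = "cpu") -> float:
    """Accuracy on FGSM-perturbed inputs (robustness proxy)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage: after training each model, compare robustness
# across capacities (widths and epsilon are placeholder values).
# for width in (64, 256, 1024):
#     model = make_mlp(width)
#     ...train model...
#     print(width, robust_accuracy(model, test_loader, epsilon=0.1))
```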


