3.8 Proceedings Paper

Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-21101-0_7

Keywords

Machine learning; Deep learning; Security; Measurement; Perturbation methods; Robustness

Abstract

Nowadays, we rely more and more on Deep Learning (DL) models, so it is essential to safeguard the security of these systems. This paper examines the security issues in Deep Learning and analyses, through experiments, how to build more resilient models. The experiments identify the strengths and weaknesses of a new approach, increasing network capacity, for improving the robustness of DL models against adversarial attacks. The results show improvements and offer new insights that researchers and practitioners can use as recommendations for creating increasingly robust DL algorithms.
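The listing does not reproduce the paper's architectures, datasets, or attack configuration, so the following is only a minimal illustrative sketch of the general idea named in the title: comparing a low-capacity and a high-capacity classifier under an adversarial attack. The use of PyTorch, a simple MLP whose hidden width stands in for capacity, the FGSM attack, and synthetic stand-in data are all assumptions for brevity, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): compare the adversarial robustness
# of a narrow vs. a wide classifier under an FGSM attack. The architectures, data,
# and attack here are stand-ins chosen for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_mlp(width: int, in_dim: int = 28 * 28, n_classes: int = 10) -> nn.Module:
    """Simple MLP whose capacity is controlled by the hidden-layer width."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(in_dim, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, n_classes),
    )


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """Fast Gradient Sign Method: perturb inputs along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def accuracy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Synthetic stand-in for an image dataset (MNIST-sized inputs in [0, 1]).
    x = torch.rand(512, 1, 28, 28)
    y = torch.randint(0, 10, (512,))

    for width in (64, 1024):  # low- vs. high-capacity model
        model = make_mlp(width)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(50):  # brief training loop on the stand-in data
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

        x_adv = fgsm_attack(model, x, y, eps=0.1)
        print(f"width={width:5d}  clean acc={accuracy(model, x, y):.2f}  "
              f"adversarial acc={accuracy(model, x_adv, y):.2f}")
```

In practice such a comparison would be run on a real dataset and a stronger attack suite; the sketch only shows the shape of the experiment, with network width as the single knob for capacity.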
