Journal
ADVANCES IN CYBERSECURITY, CYBERCRIMES, AND SMART EMERGING TECHNOLOGIES
Volume 4, Issue -, Pages 85-96
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-21101-0_7
Keywords
Machine learning; Deep learning; Security; Measurement; Perturbation methods; Robustness
Abstract
We are increasingly reliant on Deep Learning (DL) models, so safeguarding the security of these systems is essential. This paper explores the security issues in Deep Learning and analyses, through experiments, how to build more resilient models. The experiments identify the strengths and weaknesses of a new approach for improving the robustness of DL models against adversarial attacks. The results show improvements and yield insights that can serve as recommendations for researchers and practitioners seeking to build more robust DL algorithms.
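The abstract does not specify which attack the experiments use, so as illustration only, the sketch below shows a common baseline adversarial attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model: each input feature is nudged by a step `eps` in the direction that increases the loss. The model weights, inputs, and `eps` value here are made-up examples, not values from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic-regression model: x_adv = x + eps * sign(dL/dx).

    Moves each feature of x by eps in the direction that increases the
    cross-entropy loss, pushing the model toward a misclassification.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For cross-entropy with label y, the input gradient is dL/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy model and a clean input it classifies confidently as class 1
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.2], 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
# The perturbed input lowers the model's confidence in the true label
```

A model's drop in accuracy on such perturbed inputs is a standard robustness measurement, and training on adversarially perturbed inputs (adversarial training) is one widely used defence; whether the paper's approach follows this recipe is not stated in the abstract.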