Article

Defending malware detection models against evasion based adversarial attacks

Journal

PATTERN RECOGNITION LETTERS
Volume 164, Pages 119-125

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2022.10.010

Keywords

Adversarial robustness; Deep neural network; Evasion attack; Malware analysis and detection; Machine learning


Abstract

This study validates the vulnerability of machine learning-based malware detection models to adversarial samples and proposes countermeasures to improve their accuracy and resistance. The proposed MalDQN agent achieves a high fooling rate and reduces the accuracy of the malware detection models. The defensive strategies significantly enhance the capability of the models to detect and resist adversarial applications.
The last decade has witnessed a massive malware boom in the Android ecosystem. The literature suggests that artificial intelligence/machine learning-based malware detection models can potentially solve this problem. However, these detection models are often vulnerable to adversarial samples developed by malware designers. Therefore, in this work we validate the adversarial robustness and evasion resistance of different malware detection models developed using machine learning. We first designed a neural network agent (MalDQN) based on deep reinforcement learning that adds noise via perturbations to malware applications and converts them into adversarial malware applications. Malware designers can also generate these samples and use them to perform evasion attacks and fool malware detection models. The proposed MalDQN agent achieved an average 98% fooling rate against twenty distinct malware detection models based on a variety of classification algorithms (standard, ensemble, and deep neural network) and two different feature sets (Android permissions and intents). The MalDQN evasion attack reduced the average accuracy of these twenty malware detection models from 86.18% to 55.85%. Later, we also developed defensive measures to counter such evasion attacks. Our experimental results show that the proposed defensive strategies considerably improve the capability of different malware detection models to detect adversarial applications and build resistance against them. (c) 2022 Published by Elsevier B.V.
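The core idea of such an evasion attack, an agent that repeatedly adds benign-looking features (e.g. extra permissions or intents) to a malware sample until a detector misclassifies it, can be illustrated with a toy sketch. This is not the authors' MalDQN: the hand-weighted linear "detector", the greedy selection policy standing in for a trained deep-Q network, and all names and weights below are hypothetical.

```python
# Binary features (e.g. requested permissions/intents); the toy detector
# flags an app as malware when the weighted feature sum exceeds a threshold.
# Weights and threshold are invented for illustration only.
WEIGHTS = [0.9, -0.7, 0.7, -0.8, 0.8, -0.6]
THRESHOLD = 0.5

def score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def is_flagged(features):
    return score(features) > THRESHOLD

def evade(features, max_steps=6):
    """Perturb the sample by only ADDING features (flipping bits 0 -> 1),
    mirroring perturbations that preserve the app's original functionality,
    until the detector no longer flags it or no actions remain."""
    feats = list(features)
    for _ in range(max_steps):
        if not is_flagged(feats):
            break  # detector evaded
        candidates = [i for i, f in enumerate(feats) if f == 0]
        if not candidates:
            break
        # Greedy stand-in for the learned policy: pick the addition
        # that lowers the detection score the most.
        best = min(candidates, key=lambda i: WEIGHTS[i])
        feats[best] = 1
    return feats

malware = [1, 0, 1, 0, 1, 0]
adversarial = evade(malware)
print(is_flagged(malware), is_flagged(adversarial))  # True False
```

A common countermeasure against this kind of attack, broadly in the spirit of the defensive strategies the abstract mentions (whose specifics may differ), is to retrain the detector on such generated adversarial samples so the decision boundary no longer shifts under feature-addition perturbations.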

