Journal
IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 10, Pages 8432-8444
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2022.3188583
Keywords
Malware; Internet of Things (IoT); Codes; Feature extraction; Detectors; Deep learning; Classification algorithms; Android; generative adversarial network (GAN); graph neural network (GNN); machine learning
This study demonstrates the effectiveness of graph-based deep learning for detecting malicious Android apps and proposes a generative adversarial network algorithm to attack this detection method. Experimental analysis shows that the proposed algorithm can effectively reduce the detection rate of malicious apps, and that retraining the model helps combat such adversarial attacks.
Since the Internet of Things (IoT) is widely accessed through Android applications, detecting malicious Android apps is essential. In recent years, graph-based deep learning research on Android has proposed many approaches that extract relationships from an application as a graph and generate graph embeddings. First, we demonstrate the effectiveness of graph-based classification by using a graph neural network (GNN)-based classifier to generate API graph embeddings. The graph embedding is combined with Permission and Intent features to train multiple machine learning and deep learning algorithms to detect Android malware. The classification achieved an accuracy of 98.33% on the CICMalDroid data set and 98.68% on the Drebin data set. However, graph-based deep learning is vulnerable, as an attacker can add fake relationships to avoid detection by the classifier. Second, we propose a generative adversarial network (GAN)-based algorithm named VGAE-MalGAN to attack the graph-based GNN Android malware classifier. The VGAE-MalGAN generator generates adversarial malware API graphs, while the VGAE-MalGAN substitute detector (SD) tries to fit the target detector. Experimental analysis shows that VGAE-MalGAN can effectively reduce the detection rate of GNN malware classifiers. Although the model fails to detect adversarial malware, experimental analysis shows that retraining the model with generated adversarial samples helps combat adversarial attacks.
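The detection pipeline the abstract describes, an API-graph embedding produced by a GNN and concatenated with Permission and Intent features before classification, can be sketched in miniature. The sketch below is illustrative only: the toy graph, the one-hot node features, the specific permissions and intents, and the single mean-aggregation layer are stand-ins, not the paper's actual architecture or data.

```python
import numpy as np

# Toy API call graph: 4 API nodes, edges = observed call relations.
# All names and dimensions here are illustrative, not from the paper.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
feats = np.eye(4)  # one-hot node features for the 4 APIs

# One round of mean-aggregation message passing (GCN-style, self-loops
# added), then a mean-pool read-out into a single graph embedding.
a_hat = adj + np.eye(4)
deg = a_hat.sum(axis=1, keepdims=True)
node_states = (a_hat / deg) @ feats          # aggregate neighbour features
graph_embedding = node_states.mean(axis=0)   # 4-dim graph-level vector

# Concatenate with static features (Permissions / Intents as binary flags).
permissions = np.array([1.0, 0.0, 1.0])      # e.g. SEND_SMS, CAMERA, INTERNET
intents = np.array([0.0, 1.0])               # e.g. BOOT_COMPLETED, SEND
x = np.concatenate([graph_embedding, permissions, intents])

print(x.shape)  # combined feature vector fed to a downstream ML/DL classifier
```

In the paper's pipeline this combined vector would be the input to the machine learning and deep learning classifiers evaluated on CICMalDroid and Drebin; here it simply shows how graph-derived and manifest-derived features can share one feature space.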
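The attack side can be illustrated in the same spirit. The paper's VGAE-MalGAN trains a variational graph autoencoder generator against a substitute detector; the minimal sketch below swaps both for something far simpler, a greedy search that only *adds* fake API-graph edges (additions preserve app functionality, matching the abstract's "add fake relationships" threat model) until a fixed toy detector's malware score drops below threshold. The detector, weights, and graph are all made up for illustration.

```python
import numpy as np

# Illustrative stand-in for the VGAE-MalGAN idea: greedy fake-edge
# insertion against a fixed linear "detector". Not the paper's method.
n = 6
adj = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:        # original malicious call graph
    adj[i, j] = adj[j, i] = 1.0

w = np.array([0.3, -0.2, 0.6, 0.1, -0.7, 0.4])  # arbitrary detector weights

def malware_score(a):
    """Toy detector: degree-weighted score squashed to (0, 1) by a sigmoid."""
    deg = a.sum(axis=1)
    return 1.0 / (1.0 + np.exp(-(deg @ w) / n))

adv = adj.copy()
while malware_score(adv) >= 0.5:             # 0.5 = detection threshold
    # Greedily add the single missing edge that lowers the score the most.
    best, best_s = None, malware_score(adv)
    for i in range(n):
        for j in range(i + 1, n):
            if adv[i, j] == 0:
                adv[i, j] = adv[j, i] = 1.0  # tentatively add edge (i, j)
                s = malware_score(adv)
                if s < best_s:
                    best, best_s = (i, j), s
                adv[i, j] = adv[j, i] = 0.0  # undo the tentative edge
    if best is None:
        break                                # no edge helps; attack fails
    i, j = best
    adv[i, j] = adv[j, i] = 1.0              # commit the best fake edge

print(malware_score(adj), malware_score(adv))
```

The final graph scores below the detection threshold while containing the original call graph as a subgraph. The abstract's countermeasure maps directly onto this picture: graphs like `adv` are fed back as labeled malware when retraining the classifier, which is what restores robustness in the paper's experiments.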