Article

Derivative-free optimization adversarial attacks for graph convolutional networks

Journal

PeerJ Computer Science
Volume 7, Article e693

Publisher

PeerJ Inc.
DOI: 10.7717/peerj-cs.693

Keywords

Graph convolutional network; Adversarial attack; Derivative-free optimization; Machine learning

Funding

  1. National Natural Science Foundation of China [62002332]


Recent research has shown that graph convolutional networks are vulnerable to adversarial attacks. This paper proposes a black-box adversarial attack framework based on derivative-free optimization (DFO) for generating graph adversarial examples. By applying advanced DFO algorithms and redesigning the perturbation vector, the framework achieves better attack performance than existing methods, demonstrating the potential of DFO methods for node-classification adversarial attacks.
In recent years, graph convolutional networks (GCNs) have risen to prominence due to their excellent performance on graph data. However, recent research shows that GCNs are vulnerable to adversarial attacks: an attacker can maliciously modify edges or nodes of the graph to mislead the model's classification of target nodes, or even degrade the model's overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free optimization (DFO) that generates graph adversarial examples without using gradients and makes it convenient to apply advanced DFO algorithms. Second, we implement a direct attack algorithm (DFDA) on top of the framework using the Nevergrad library. Additionally, we overcome the problem of a large search space by redesigning the perturbation vector with a constrained size. Finally, we conduct a series of experiments across different datasets and parameter settings. The results show that DFDA outperforms Nettack in most cases and achieves an average attack success rate of more than 95% on the Cora dataset when perturbing at most eight edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node-classification adversarial attacks.
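The abstract outlines the mechanics: encode a size-constrained set of edge flips as the perturbation vector and let a derivative-free optimizer from Nevergrad search that space using only queries to the victim model. Below is a minimal sketch of this idea, not the authors' DFDA implementation; the query interface predict_proba(adj) and the candidate-edge construction are illustrative assumptions.

```python
# Minimal sketch (assumed interface, not the paper's code): black-box edge-flip
# attack on a GCN node classifier via Nevergrad's derivative-free optimizers.
import numpy as np
import nevergrad as ng

def dfo_edge_attack(adj, target, true_label, predict_proba,
                    budget=500, max_flips=8):
    """adj: dense adjacency matrix; predict_proba(adj) -> (n_nodes, n_classes)
    class probabilities from the victim GCN (hypothetical query-only access)."""
    n = adj.shape[0]
    # Constrained perturbation vector: only edges incident to the target node,
    # and at most `max_flips` of them -- a small search space, in the spirit of
    # the paper's redesigned, size-constrained perturbation vector.
    candidates = [(target, v) for v in range(n) if v != target]
    param = ng.p.Tuple(*[ng.p.Choice(list(range(len(candidates))))
                         for _ in range(max_flips)])
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=budget)

    def loss(idx):
        a = adj.copy()
        for i in set(idx):                      # flip each selected edge once
            u, v = candidates[i]
            a[u, v] = a[v, u] = 1 - a[u, v]
        p = predict_proba(a)[target]
        # Margin of the true class; driving it negative means misclassification.
        return p[true_label] - np.max(np.delete(p, true_label))

    best = optimizer.minimize(loss)             # gradient-free search
    return [candidates[i] for i in set(best.value)]
```

NGOpt is only a stand-in here: Nevergrad exposes many DFO algorithms (CMA, differential evolution, one-plus-one, and others) behind the same interface, which is what makes swapping in advanced DFO algorithms convenient, as the abstract claims.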
