Article

Smoothing Adversarial Training for GNN

Journal

IEEE Transactions on Computational Social Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCSS.2020.3042628

Keywords

Training; Smoothing methods; Robustness; Topology; Data models; Task analysis; Predictive models; Adversarial attack; adversarial training; complex network; cross-entropy loss; smoothing distillation (SD)

Funding

  1. National Natural Science Foundation of China [62072406, 61973273]
  2. Zhejiang Provincial Natural Science Foundation [LY19F020025, LR19F030001]
  3. Major Special Funding for Science and Technology Innovation 2025 in Ningbo [2018B10063]
  4. Key Laboratory of the Public Security Ministry Open Project [2020DSJSYS001]

Abstract

This study introduces a smoothing adversarial training (SAT) method that improves the robustness of graph neural networks (GNNs) by smoothing the gradients of the graph convolutional network (GCN), reducing the amplitude of adversarial gradients to defend against both global attacks and target label node attacks. Comprehensive experiments demonstrate that the SAT method achieves state-of-the-art defensibility against various adversarial attacks on node classification and community detection.

Recently, graph neural networks (GNNs) were proposed to analyze various graphs/networks and have been proven to outperform many other network analysis methods. However, such state-of-the-art methods have also been shown to suffer from adversarial attacks: carefully crafted adversarial networks, obtained by slightly perturbing clean ones, may invalidate these methods in many applications, such as network embedding, node classification, link prediction, and community detection. Adversarial training has proven to be an efficient defense strategy against adversarial attacks in computer vision and graph mining. However, almost all algorithms based on adversarial training focus on global defense through overall adversarial training. In a more practical scenario, certain users, i.e., specific labeled users, may be targeted for attack, and defending against such target node attacks remains a challenge for existing adversarial training methods. Therefore, we propose smoothing adversarial training (SAT) to improve the robustness of GNNs. In particular, we analytically investigate the robustness of the graph convolutional network (GCN), one of the classic GNNs, and propose two smooth defensive strategies: smoothing distillation and a smoothing cross-entropy loss function. Both smooth the gradients of the GCN and consequently reduce the amplitude of adversarial gradients, which helps mask gradients from attackers in both global attacks and target label node attacks. Comprehensive experiments on five real-world networks verify that the proposed SAT method shows state-of-the-art defensibility against different adversarial attacks on node classification and community detection. In particular, SAT decreases the average attack success rate of different attack methods by about 40%, at the cost of a tolerable decline in the embedding performance of the original network.
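As background for the two defensive strategies named above, the following sketch illustrates one common way such losses are formulated, assuming that smoothing distillation follows standard temperature-based knowledge distillation and that the smoothing cross-entropy is a label-smoothed cross-entropy; the temperature T, smoothing factor eps, and all names in the code are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def smoothing_distillation_loss(student_logits, teacher_logits, T=4.0):
    # Match the student's tempered softmax to the teacher's tempered softmax.
    # A higher temperature T flattens both distributions, which smooths the
    # resulting gradients. (T=4.0 is an illustrative choice, not from the paper.)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence scaled by T^2, as in standard knowledge distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def smoothing_cross_entropy(logits, labels, eps=0.1):
    # Cross-entropy against label-smoothed targets: each one-hot label keeps
    # mass 1 - eps and spreads eps uniformly over the other classes, which
    # bounds the softmax gradient an attacker can follow.
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    targets = torch.full_like(log_probs, eps / (n_classes - 1))
    targets.scatter_(-1, labels.unsqueeze(-1), 1.0 - eps)
    return -(targets * log_probs).sum(dim=-1).mean()

# Toy usage on random node logits (5 nodes, 3 classes).
if __name__ == "__main__":
    torch.manual_seed(0)
    student = torch.randn(5, 3)
    teacher = torch.randn(5, 3)
    labels = torch.tensor([0, 2, 1, 1, 0])
    print(smoothing_cross_entropy(student, labels).item())
    print(smoothing_distillation_loss(student, teacher).item())

In both cases the softened targets shrink the magnitude of the loss gradients with respect to the inputs, which is the gradient-masking effect the abstract attributes to SAT.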

Reviews

Primary Rating

4.6 (not enough ratings)

Secondary Ratings

Novelty: not yet rated
Significance: not yet rated
Scientific rigor: not yet rated