Proceedings Paper

Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3366423.3380149

Keywords

Adversarial Attack; Graph Poisoning; Reinforcement Learning

Funding

  1. NIH NCATS [UL1 TR002014]
  2. NSF [1518732, 1640834, 1636795]
  3. Edward Frymoyer Endowed Professorship at Pennsylvania State University
  4. Sudha Murty Distinguished Visiting Chair in Neurocomputing and Data Science, Pratiksha Trust, Indian Institute of Science
  5. Samsung GRO Award [225003]
  6. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1518732]

Abstract

Graph Neural Networks (GNNs) offer a powerful approach to node classification in complex networks across many domains, including social media, e-commerce, and FinTech. However, recent studies show that GNNs are vulnerable to attacks that adversely impact their node classification performance. Existing studies of adversarial attacks on GNNs focus primarily on manipulating the connectivity between existing nodes, a task that requires considerable effort on the part of the attacker in real-world settings. In contrast, it is far more expedient for an attacker to inject adversarial nodes, e.g., fake profiles with forged links, into an existing graph so as to degrade the GNN's performance in classifying existing nodes. Hence, we consider a novel form of node injection poisoning attack on graph data. We model the key steps of a node injection attack, e.g., establishing links between the injected adversarial nodes and other nodes, choosing the label of an injected node, etc., as a Markov Decision Process. We propose a novel reinforcement learning method for Node Injection Poisoning Attacks (NIPA) that sequentially modifies the labels and links of the injected nodes without changing the connectivity between existing nodes. Specifically, we introduce a hierarchical Q-learning network to manipulate the labels of the adversarial nodes and their links with other nodes in the graph, and design an appropriate reward function to guide the reinforcement learning agent toward reducing the node classification performance of the GNN. Experimental results on three benchmark datasets show that NIPA is consistently more effective than baseline node injection attack methods for poisoning graph data.
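The abstract describes the attack as an MDP in which an agent hierarchically chooses, for each injected node, first its forged links and then its label, with a reward tied to the drop in the victim GNN's node classification accuracy. The toy sketch below illustrates that loop only in spirit: the function name `nipa_sketch`, the tabular Q-values, and the 1-nearest-neighbour "surrogate classifier" are all our illustrative assumptions, not the paper's actual deep hierarchical Q-network or GNN victim model.

```python
import random
from collections import defaultdict

def nipa_sketch(adj, labels, n_inject, budget, episodes=50, eps=0.2, seed=0):
    """Toy node-injection poisoning loop (illustrative sketch, not the
    paper's NIPA implementation).

    adj      : dict node -> set of neighbours (existing graph, never modified)
    labels   : dict node -> class label of an existing node
    n_inject : number of adversarial nodes to inject
    budget   : number of links each injected node may forge
    """
    rng = random.Random(seed)
    nodes = list(adj)
    classes = sorted(set(labels.values()))
    # Two Q-tables mimic the hierarchical decomposition described in the
    # abstract: one scores link targets, the other scores injected labels.
    q_link = defaultdict(float)
    q_label = defaultdict(float)

    def surrogate_accuracy(fake_edges, fake_labels):
        # Stand-in for the victim GNN: a neighbour majority vote whose
        # accuracy degrades when injected neighbours carry hostile labels.
        correct = 0
        for v in nodes:
            votes = [labels[u] for u in adj[v]]
            votes += [fake_labels[f] for f, t in fake_edges if t == v]
            pred = max(set(votes), key=votes.count) if votes else labels[v]
            correct += pred == labels[v]
        return correct / len(nodes)

    best_edges, best_labels, best_acc = [], {}, 1.0
    for _ in range(episodes):
        fake_edges, fake_labels = [], {}
        for f in range(n_inject):
            # Hierarchical action: first pick link targets, then a label.
            for _ in range(budget):
                t = (rng.choice(nodes) if rng.random() < eps
                     else max(nodes, key=lambda v: q_link[(f, v)]))
                fake_edges.append((f, t))
            c = (rng.choice(classes) if rng.random() < eps
                 else max(classes, key=lambda y: q_label[(f, y)]))
            fake_labels[f] = c
        # Reward: accuracy drop caused by the injected nodes.
        reward = 1.0 - surrogate_accuracy(fake_edges, fake_labels)
        for f, t in fake_edges:
            q_link[(f, t)] += 0.1 * (reward - q_link[(f, t)])
        for f, c in fake_labels.items():
            q_label[(f, c)] += 0.1 * (reward - q_label[(f, c)])
        if 1.0 - reward <= best_acc:
            best_edges, best_labels, best_acc = fake_edges, fake_labels, 1.0 - reward
    return best_edges, best_labels, best_acc
```

Note that, as in the paper's setting, only the injected nodes' links and labels change; `adj`, the connectivity among existing nodes, is never touched. A real instantiation would replace the Q-tables with neural networks and the surrogate vote with queries to the trained GNN.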
