Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies

By Wei Jin et al.

Table of Contents

1. INTRODUCTION
2. PRELIMINARIES AND DEFINITIONS
3. TAXONOMY OF GRAPH ADVERSARIAL ATTACKS
3.1 Attacker’s Capacity
3.2 Perturbation Type
3.3 Attacker’s Goal
3.4 Attacker’s Knowledge
3.5 Victim Models
4. GRAPH ADVERSARIAL ATTACKS
4.1 White-box Attacks
4.1.1 Targeted Attack
4.1.2 Untargeted Attack
4.2 Gray-box Attacks
4.2.1 Targeted Attack
4.2.2 Untargeted Attack
4.3 Black-box Attacks
4.3.1 Targeted Attack
4.3.2 Untargeted Attack
5. DEFENSE STRATEGIES
6. EMPIRICAL STUDIES
7. FUTURE RESEARCH DIRECTIONS

Summary

Deep neural networks (DNNs) have achieved remarkable performance on a variety of tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations to their inputs, known as adversarial attacks. As extensions of DNNs to graph data, Graph Neural Networks (GNNs) have been shown to inherit this vulnerability: an adversary can mislead GNNs into making wrong predictions by modifying the graph structure, for example by manipulating a few edges. This vulnerability has raised serious concerns about adopting GNNs in safety-critical applications and has attracted increasing research attention in recent years. It is therefore necessary and timely to provide a comprehensive overview of existing graph adversarial attacks and their countermeasures.
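To make the threat model concrete, below is a minimal sketch (not the paper's method) of a structure perturbation: flipping a single edge in a toy graph and checking whether a one-layer GCN's prediction for a target node changes. The graph, features, weights, and the choice of edge to flip are all illustrative assumptions; a real attack such as those surveyed in Section 4 would choose the perturbation adversarially, e.g. via gradients.

```python
import numpy as np

def gcn_forward(A, X, W):
    """One GCN propagation step: D^-1/2 (A + I) D^-1/2 X W, then argmax per node."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    logits = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return logits.argmax(axis=1)              # predicted class per node

rng = np.random.default_rng(0)
n, f, c = 6, 4, 2                             # nodes, features, classes (toy sizes)
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0                   # a small path graph
X = rng.normal(size=(n, f))                   # illustrative node features
W = rng.normal(size=(f, c))                   # illustrative (fixed) model weights

target = 0
before = gcn_forward(A, X, W)[target]

# Structure perturbation: flip a single edge incident to the target node.
A_pert = A.copy()
A_pert[target, 5] = A_pert[5, target] = 1.0 - A_pert[target, 5]
after = gcn_forward(A_pert, X, W)[target]

print(f"prediction for node {target}: {before} -> {after}")
```

Because GNN predictions aggregate information over a node's neighborhood, even a single flipped edge changes the messages the target node receives, which is why such small structural edits can be enough to alter its predicted label.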