Graph Attention Networks

By Petar Veličković et al.
Published on June 10, 2018

Table of Contents

1. Introduction
2. GAT Architecture
2.1 Graph Attentional Layer
2.2 Comparisons to Related Work
3. Evaluation
3.1 Datasets

Summary

Published as a conference paper at ICLR 2018, Graph Attention Networks (GATs) are neural network architectures that operate on graph-structured data. They leverage masked self-attentional layers to address shortcomings of prior methods based on graph convolutions or their approximations. By letting each node attend over the features of its neighborhood, GATs implicitly assign different weights to different neighbors, without costly matrix operations (such as inversion) and without requiring knowledge of the full graph structure upfront. GAT models achieved or matched state-of-the-art results across several graph benchmarks, demonstrating their applicability to both transductive and inductive problems.
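The mechanism the summary describes can be sketched as a single-head graph attentional layer: shared linear transform, pairwise attention scores passed through a LeakyReLU, a softmax masked to each node's neighborhood, and a weighted aggregation. The sketch below is a minimal NumPy illustration under those assumptions; the names (`gat_layer`, `H`, `A`, `W`, `a`) are illustrative, not from the authors' code.

```python
import numpy as np

def gat_layer(H, A, W, a, alpha=0.2):
    """Single-head graph attentional layer (illustrative sketch).

    H : (N, F)  node features
    A : (N, N)  adjacency matrix (nonzero = edge; include self-loops)
    W : (F, F') shared linear transform
    a : (2*F',) attention vector
    """
    Wh = H @ W                          # (N, F') transformed node features
    N = Wh.shape[0]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for every ordered node pair
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            z = np.concatenate([Wh[i], Wh[j]])
            e[i, j] = z @ a
    e = np.where(e > 0, e, alpha * e)   # LeakyReLU with negative slope alpha
    # Masked softmax: each node attends only over its neighborhood
    e = np.where(A > 0, e, -1e9)
    e = e - e.max(axis=1, keepdims=True)          # numerical stability
    att = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return att @ Wh                     # (N, F') aggregated output features

# Tiny usage example: a 4-node graph with self-loops
rng = np.random.default_rng(0)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
out = gat_layer(H, A, W, a)             # shape (4, 2)
```

The masking step is what makes the attention "masked": scores for non-adjacent pairs are pushed to a large negative value, so the softmax effectively distributes weight only over each node's neighbors. The paper additionally uses multi-head attention (concatenating or averaging several such heads), which this single-head sketch omits.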