Relational Inductive Biases, Deep Learning, and Graph Networks

By Peter W. Battaglia et al.
Published on Oct. 17, 2018

Table of Contents

Abstract
1 Introduction
2 Relational Inductive Biases
Box 1: Relational Reasoning
Box 2: Inductive Biases
Table 1: Relational inductive biases in standard deep learning components.

| Component       | Entities      | Relations  | Rel. inductive bias | Invariance              |
|-----------------|---------------|------------|---------------------|-------------------------|
| Fully connected | Units         | All-to-all | Weak                | -                       |
| Convolutional   | Grid elements | Local      | Locality            | Spatial translation     |
| Recurrent       | Timesteps     | Sequential | Sequentiality       | Time translation        |
| Graph network   | Nodes         | Edges      | Arbitrary           | Node, edge permutations |
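The last row's invariance can be made concrete: graph networks aggregate edge information with a permutation-invariant reduction (such as a sum), so relabeling the edges leaves the result unchanged. A minimal sketch of this property, using my own illustrative function names (not code from the paper or its library):

```python
import numpy as np

def aggregate_incoming(edge_feats, receivers, num_nodes):
    """Sum edge features into their receiving nodes (permutation-invariant)."""
    out = np.zeros((num_nodes, edge_feats.shape[1]))
    np.add.at(out, receivers, edge_feats)  # scatter-add by receiver index
    return out

edge_feats = np.array([[1.0], [2.0], [4.0]])
receivers = np.array([0, 1, 0])  # edges 0 and 2 both point at node 0

agg = aggregate_incoming(edge_feats, receivers, num_nodes=2)

# Shuffling the edge ordering yields the same per-node aggregate.
perm = np.array([2, 0, 1])
agg_perm = aggregate_incoming(edge_feats[perm], receivers[perm], num_nodes=2)
assert np.allclose(agg, agg_perm)
```

Because the sum does not depend on edge order, the block as a whole is invariant to node and edge permutations, which is the relational inductive bias the table attributes to graph networks.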

Summary

Artificial intelligence (AI) has made significant progress in vision, language, control, and decision-making. The paper argues that combinatorial generalization, the ability to construct new inferences and behaviors from known building blocks, must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to reaching it. To that end, it introduces the notion of relational inductive biases within deep learning architectures: biases toward learning about entities, relations, and the rules for composing them. As a concrete building block with a strong relational inductive bias, the paper presents the graph network, which supports relational reasoning and combinatorial generalization, and it discusses how integrating structured knowledge enables more sophisticated reasoning patterns. The authors also release an open-source software library for building graph networks and advocate an approach that combines the strengths of structured methods and end-to-end learning to advance toward human-like intelligence.
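The paper's graph network block consists of an edge update, a node update, and a global update, each interleaved with permutation-invariant aggregation. The following is a toy sketch of that three-step structure; the update functions here are simple additive placeholders rather than the learned neural networks the paper intends, and all variable names are my own assumptions (this is not the API of the authors' library):

```python
import numpy as np

def gn_block(V, E, u, senders, receivers):
    """One toy graph network pass over node features V (n, d),
    edge features E (m, d), and a global feature u (d,)."""
    # 1. Edge update: combine each edge with its sender, receiver, and global.
    E_new = E + V[senders] + V[receivers] + u          # stand-in for phi_e
    # 2. Aggregate updated edges per receiving node (permutation-invariant sum).
    agg_e = np.zeros_like(V)
    np.add.at(agg_e, receivers, E_new)                 # stand-in for rho_{e->v}
    # 3. Node update from aggregated incoming edges and the global.
    V_new = V + agg_e + u                              # stand-in for phi_v
    # 4. Global update from all updated edges and nodes.
    u_new = u + E_new.sum(0) + V_new.sum(0)            # stand-in for phi_u
    return V_new, E_new, u_new

# A 3-node chain: edges 0->1 and 1->2, scalar features.
V = np.array([[1.0], [2.0], [3.0]])
E = np.array([[0.1], [0.2]])
u = np.array([0.0])
senders = np.array([0, 1])
receivers = np.array([1, 2])
V_new, E_new, u_new = gn_block(V, E, u, senders, receivers)
```

Stacking such blocks propagates information across the graph one hop per block, which is what gives graph networks their capacity for relational reasoning.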