Deepening Neural Algorithmic Reasoning

By Petar Veličković et al.
Published on Dec. 10, 2022

Table of Contents

A brief description of alignment (Xu et al.’s definition and beyond)
Convolutions, Polynomials, and Integral Transforms
Code Examples
Further Theory (monads!)
What is a Convolution?
Integral Transforms
Alignment Code Examples
Type-checking GNNs
Monads, Mo’ Problems

Summary

This tutorial explores the concept of algorithmic alignment, focusing on the relationship between neural networks and the reasoning tasks they are trained to solve. It covers convolutions, polynomials, integral transforms, code examples, and further theory. Starting from Xu et al.'s definition of alignment, it discusses how a network can be aligned to a reasoning function through modular decomposition, and examines practical considerations such as update functions, sparse updates, and aligning to multiple algorithms at once. It then introduces convolutions and integral transforms, and explains how GNNs align to dynamic programming. The presentation concludes with monads in message passing and the role of semirings in feature space.
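As a concrete illustration of the GNN-to-dynamic-programming alignment and the role of semirings mentioned above, here is a minimal sketch (not from the talk itself; all names are illustrative) showing that one message-passing round over the tropical (min, +) semiring is exactly one Bellman-Ford relaxation step for single-source shortest paths:

```python
INF = float("inf")

def bellman_ford_step(dist, edges):
    """One message-passing round over the (min, +) semiring.

    message     = dist[u] + w   (semiring "multiplication" is +)
    aggregation = min           (semiring "addition" is min)
    """
    new_dist = dict(dist)
    for u, v, w in edges:
        new_dist[v] = min(new_dist[v], dist[u] + w)
    return new_dist

# Tiny example graph: 0 -> 1 (weight 4), 0 -> 2 (1), 2 -> 1 (2)
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0)]
dist = {0: 0.0, 1: INF, 2: INF}  # source node is 0
for _ in range(2):               # |V| - 1 relaxation rounds
    dist = bellman_ford_step(dist, edges)

print(dist)  # {0: 0.0, 1: 3.0, 2: 1.0}
```

Swapping the semiring (e.g. (max, +) for longest paths, or (+, ×) for path counting) changes which dynamic program the same message-passing skeleton computes, which is why the choice of semiring in feature space matters.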