The Forward-Forward Algorithm: Some Preliminary Investigations

By Geoffrey Hinton
Published on Dec. 27, 2022

Table of Contents

1. What is wrong with backpropagation
2. The Forward-Forward Algorithm
2.1 Learning multiple layers of representation with a simple layer-wise goodness function
3. Some experiments with FF
3.1 The backpropagation baseline
3.2 A simple unsupervised example of FF
3.3 A simple supervised example of FF

Summary

The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth further investigation. The Forward-Forward algorithm (FF) replaces the forward and backward passes of backpropagation with two forward passes: one with positive (i.e., real) data and the other with negative data. Each layer has its own objective function, which is simply to have high goodness for positive data and low goodness for negative data; the sum of the squared activities in a layer can be used as the goodness.

FF is comparable in speed to backpropagation, but it has the advantage that it can be used when the precise details of the forward computation are unknown, so networks containing unknown non-linearities do not need to resort to reinforcement learning. It can also learn while pipelining sequential data through a neural network without ever storing the neural activities or stopping to propagate error derivatives.

That said, FF is somewhat slower than backpropagation and does not generalize quite as well on several of the toy problems investigated in this paper, so it is unlikely to replace backpropagation for applications where power is not an issue. The two areas in which FF may be superior to backpropagation are as a model of learning in cortex and as a way of making use of very low-power analog hardware without resorting to reinforcement learning.
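To make the two-pass idea concrete, here is a minimal PyTorch sketch of a single FF layer trained with a local objective: push the goodness (sum of squared activities) above a threshold for positive data and below it for negative data, via a logistic loss on goodness minus the threshold as in the paper. The class name `FFLayer`, the threshold of 2.0, and the SGD learning rate are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer trained with a Forward-Forward-style local objective:
    goodness = sum of squared activities, driven high for positive data
    and low for negative data. A sketch, not the paper's exact setup."""

    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold  # assumed goodness threshold
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input length so the next layer cannot simply
        # read off the previous layer's goodness, only its direction.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness of positives
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness of negatives
        # softplus(z) = log(1 + exp(z)): penalize positive goodness below
        # the threshold and negative goodness above it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so no derivatives propagate between layers:
        # each layer learns greedily from its own local objective.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Hypothetical usage: feed positive and negative batches through a layer;
# the detached outputs would serve as inputs to the next layer in a stack.
layer = FFLayer(784, 500)
x_pos = torch.rand(64, 784)  # stand-in for real data
x_neg = torch.rand(64, 784)  # stand-in for negative data
h_pos, h_neg = layer.train_step(x_pos, x_neg)
```

Because each layer optimizes only its own goodness and passes on detached, length-normalized activities, no backward pass through the stack is ever needed, which is what allows the pipelined learning described above.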