Table of Contents
1. Introduction
2. Complementary priors
3. Restricted Boltzmann machines and contrastive divergence learning
4. A greedy learning algorithm for transforming representations
5. Back-fitting with the up-down algorithm
6. Performance on the MNIST database
Summary
The document presents a fast learning algorithm for deep belief nets. It introduces complementary priors, which eliminate the explaining-away effects that make inference difficult in densely connected directed networks; shows the equivalence between restricted Boltzmann machines (RBMs) and infinite directed networks with tied weights; and derives a greedy algorithm that learns one layer of representation at a time. It then describes back-fitting with the up-down algorithm and reports performance on the MNIST database of handwritten digits.
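The RBM training step at the heart of the greedy layer-wise procedure is contrastive divergence. A minimal sketch of one CD-1 update in NumPy is below; the network sizes, learning rate, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny RBM for illustration: 6 visible units, 4 hidden units (sizes assumed).
n_visible, n_hidden = 6, 4
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # weight matrix
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def cd1_update(v0, lr=0.1):
    """One CD-1 step: up-pass, sample, reconstruct, up-pass again."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities given the data vector v0.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one step of Gibbs sampling (the "reconstruction").
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 approximation to the log-likelihood gradient.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

# Train on a single random binary vector for a few steps.
v = rng.integers(0, 2, size=n_visible).astype(float)
for _ in range(10):
    cd1_update(v)
```

Stacking such RBMs, each trained on the hidden activities of the one below, is the greedy procedure described in the summary above.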