Deep Learning

By Sargur Srihari et al.

Table of Contents

Topics in Autoencoders
1. Undercomplete Autoencoders
2. Regularized Autoencoders
3. Representational Power, Layer Size and Depth
4. Stochastic Encoders and Decoders
5. Denoising Autoencoders
6. Learning Manifolds and Autoencoders
7. Contractive Autoencoders
8. Predictive Sparse Decomposition
9. Applications of Autoencoders
Example of Noise in a DAE
DAE Training procedure
DAE for MNIST data
Estimating the Score
DAE learns a vector field
Vector field learnt by a DAE

Summary

Denoising autoencoders (DAEs) are a key topic in deep learning. A DAE receives a corrupted data point as input and is trained to predict the original, uncorrupted data point as output; by minimizing the reconstruction loss between its output and the clean input, it learns to undo the corruption. The document discusses examples of noise used in DAEs, the training procedure, and a concrete application to MNIST data. It also shows how a trained DAE learns a vector field that points toward the data manifold, thereby estimating the score (the gradient of the log data density) of the underlying distribution.
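The training procedure described above can be sketched in a few lines of numpy. This is a minimal illustration under simple assumptions (a single sigmoid hidden layer, Gaussian corruption noise, squared-error loss, plain gradient descent); the data, layer sizes, and hyperparameters are made up for the example and are not from the original slides. The key point is that the loss compares the reconstruction against the *clean* input x, while the encoder sees only the corrupted input x̃:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 points in 8 dimensions (illustrative only)
X = rng.normal(size=(200, 8))

# Single-hidden-layer autoencoder: encoder f, linear decoder g
W_enc = rng.normal(scale=0.1, size=(8, 4))
b_enc = np.zeros(4)
W_dec = rng.normal(scale=0.1, size=(4, 8))
b_dec = np.zeros(8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, noise_std = 0.5, 0.5
losses = []
for epoch in range(300):
    # Corruption process C(x_tilde | x): additive Gaussian noise
    X_tilde = X + noise_std * rng.normal(size=X.shape)

    # Forward pass: encode the CORRUPTED input, then decode
    H = sigmoid(X_tilde @ W_enc + b_enc)      # h = f(x_tilde)
    X_hat = H @ W_dec + b_dec                 # g(f(x_tilde))

    # Loss compares the reconstruction to the CLEAN input x
    diff = X_hat - X
    losses.append(np.mean(diff ** 2))

    # Backpropagation of the mean-squared-error loss
    dX_hat = 2.0 * diff / X.size
    dW_dec = H.T @ dX_hat
    db_dec = dX_hat.sum(axis=0)
    dH = dX_hat @ W_dec.T
    dZ = dH * H * (1.0 - H)                   # sigmoid derivative
    dW_enc = X_tilde.T @ dZ
    db_enc = dZ.sum(axis=0)

    # Gradient-descent parameter updates
    W_enc -= lr * dW_enc; b_enc -= lr * db_enc
    W_dec -= lr * dW_dec; b_dec -= lr * db_dec
```

After training, the displacement g(f(x)) − x defines the vector field mentioned above: for small corruption noise it points toward higher-density regions of the data, which is why a DAE can be read as an estimator of the score of the data distribution.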