FractalNet: Ultra-Deep Neural Networks without Residuals

By Gustav Larsson et al.
Published on May 26, 2017

Table of Contents

1. Introduction
2. Related Work
3. Fractal Networks
3.1 Regularization via Drop-Path
3.2 Data Augmentation
3.3 Implementation Details

Summary

The document introduces FractalNet, a design strategy for neural network macro-architecture based on self-similarity. It presents an alternative to ResNet and demonstrates that residual learning is not necessary for building ultra-deep neural networks. Fractal networks exhibit an anytime property: shallow subnetworks provide quick but moderately accurate predictions, while deeper subnetworks offer higher accuracy at higher latency. The paper discusses connections between FractalNet and previous deep network designs, and develops drop-path regularization for fractal networks. Experimental comparisons with residual networks on several datasets show that fractal networks remain effective even without data augmentation. Implementation details, based on Caffe, and the training methodology are also provided.
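To make the self-similar design concrete, below is a minimal sketch of the fractal expansion rule, f_1(z) = conv(z) and f_C(z) = join(conv(z), f_{C-1}(f_{C-1}(z))), together with a simplified local drop-path at each join. The paper's experiments use Caffe; this PyTorch rendition, along with the names FractalBlock and conv_unit, the 3x3 kernel size, and the drop probability, are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch of a fractal block with local drop-path (assumed PyTorch rendition).
import random
import torch
import torch.nn as nn

def conv_unit(c_in, c_out):
    # Basic conv -> batch norm -> ReLU unit (an assumed building block).
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class FractalBlock(nn.Module):
    """Expansion rule: f_1(z) = conv(z);
    f_C(z) = join(conv(z), f_{C-1}(f_{C-1}(z)))."""
    def __init__(self, c_in, c_out, columns, drop_prob=0.15):
        super().__init__()
        self.columns = columns
        self.drop_prob = drop_prob
        self.shortcut = conv_unit(c_in, c_out)
        if columns > 1:
            self.sub1 = FractalBlock(c_in, c_out, columns - 1, drop_prob)
            self.sub2 = FractalBlock(c_out, c_out, columns - 1, drop_prob)

    def forward(self, z):
        if self.columns == 1:
            return self.shortcut(z)
        paths = [self.shortcut(z), self.sub2(self.sub1(z))]
        if self.training:
            # Simplified local drop-path: drop each input to the join with
            # probability drop_prob, but always keep at least one path.
            kept = [p for p in paths if random.random() > self.drop_prob]
            if not kept:
                kept = [random.choice(paths)]
            paths = kept
        # The join is an element-wise mean over the surviving paths.
        return torch.stack(paths, dim=0).mean(dim=0)

# Usage: a 4-column block whose longest path is 2^(4-1) = 8 conv units deep,
# while its shortest path is a single conv unit.
block = FractalBlock(c_in=3, c_out=16, columns=4)
out = block(torch.randn(1, 3, 32, 32))
```

The shortest column in each block is what gives the anytime property described above: evaluating only the shallow paths yields a fast, moderately accurate subnetwork, while the full set of columns recovers the deep network.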