Sparsely Activated Networks

By P. Bizopoulos et al.

Table of Contents

Abstract
Index Terms
I. Introduction
II. φ Metric
III. Sparsely Activated Networks
  A. Sparse Activation Functions
    1) Identity
    2) ReLU
    3) top-k absolutes
    4) Extrema-Pool indices
    5) Extrema
  B. SAN Architecture/Training

Summary

This paper argues that unsupervised models should be evaluated jointly on the compression ratio and the reconstruction accuracy of their representations, and it introduces the φ metric to quantify this trade-off in model complexity. It then presents Sparsely Activated Networks (SANs), models consisting of kernels with shared weights followed by a sparse activation function, which are trained to minimize φ. Five sparse activation functions are defined: Identity, ReLU, top-k absolutes, Extrema-Pool indices, and Extrema. The SAN architecture and training procedure are described in terms of convolutional kernels and the sparse activation maps they produce. The paper concludes with experimental results on several datasets and a discussion of the implications and limitations of SANs.
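To make two of the named sparse activation functions concrete, here is a minimal Python/PyTorch sketch for a 1-D activation map, based on what the names suggest: top-k absolutes keeps the k activations with the largest magnitude and zeros the rest, while Extrema-Pool indices keeps one largest-magnitude activation per non-overlapping pooling window. The function names, signatures, and the windowing loop are illustrative assumptions, not the authors' exact implementation.

import torch

def topk_absolutes(x, k):
    # Keep the k activations with the largest |value|; zero the rest.
    out = torch.zeros_like(x)
    _, idx = x.abs().topk(k)
    out[idx] = x[idx]
    return out

def extrema_pool_indices(x, m):
    # Keep only the largest-|value| activation in each window of size m.
    out = torch.zeros_like(x)
    for start in range(0, x.numel(), m):
        window = x[start:start + m]
        j = start + window.abs().argmax()
        out[j] = x[j]
    return out

x = torch.tensor([0.1, -2.0, 0.3, 1.5, -0.2, 0.7, -1.1, 0.05])
print(topk_absolutes(x, 2))        # only -2.0 and 1.5 survive
print(extrema_pool_indices(x, 4))  # one survivor per window of 4

Both functions produce activation maps that are mostly zeros, which is what drives the compression side of the φ trade-off.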
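The architecture and training procedure can be sketched in the same spirit: a kernel with shared weights is convolved with the input to produce an activation map, the map is sparsified, and the same kernel reconstructs the input from the sparse map, with the reconstruction error as the training loss. The kernel size, the sparsity level k, the optimizer settings, and the choice of top-k absolutes as the sparsifier below are assumptions for illustration, not the paper's exact configuration.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
kernel = torch.randn(1, 1, 9, requires_grad=True)       # one shared 1-D kernel
opt = torch.optim.Adam([kernel], lr=1e-2)

def topk_absolutes(a, k):
    # Zero everything except the k largest-|value| activations.
    out = torch.zeros_like(a)
    _, idx = a.abs().flatten().topk(k)
    out.view(-1)[idx] = a.flatten()[idx]
    return out

x = torch.sin(torch.linspace(0, 4 * torch.pi, 128)).view(1, 1, -1)  # toy signal

for step in range(200):
    a = F.conv1d(x, kernel, padding=4)                  # encode: activation map
    s = topk_absolutes(a, k=4)                          # sparse activation map
    x_hat = F.conv_transpose1d(s, kernel, padding=4)    # decode with same kernel
    loss = F.mse_loss(x_hat, x)                         # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

Under a setup like this, the kernel tends to absorb the repeating motif of the input so that a handful of activations suffice for reconstruction, which is the intuition behind evaluating the model on both reconstruction accuracy and representation size.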