Aggregated Residual Transformations for Deep Neural Networks

By S. Xie et al.

Table of Contents

1. Introduction
2. Related Work
3. Method
3.1. Template
3.2. Revisiting Simple Neurons
3.3. Aggregated Transformations
3.4. Model Capacity

Summary

The paper presents ResNeXt, a network architecture for image classification built from aggregated residual transformations. It introduces cardinality, the size of the set of transformations aggregated in each block, as an essential dimension alongside depth and width. Because the network repeats building blocks that all share the same topology, the design exposes only a few hyper-parameters and simplifies network engineering. Experiments show that increasing cardinality improves classification accuracy at the same complexity, and that it is more effective than going deeper or wider when capacity is increased, with ResNeXt outperforming ResNet and Inception counterparts across several benchmarks.
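To make the block structure concrete, below is a minimal sketch of one such building block using the grouped-convolution form of the aggregated transformation. PyTorch is an assumption here (the paper does not prescribe a framework), and the class name `ResNeXtBlock` plus the widths shown (256-d input, 128 bottleneck channels, cardinality 32) are illustrative values mirroring the paper's "32x4d" template, not an official implementation.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block whose aggregated transformations are realized
    as a single grouped 3x3 convolution (cardinality = number of groups)."""

    def __init__(self, in_channels=256, bottleneck_width=128, cardinality=32):
        super().__init__()
        self.transform = nn.Sequential(
            # 1x1 conv reduces the channel count before the grouped conv
            nn.Conv2d(in_channels, bottleneck_width, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck_width),
            nn.ReLU(inplace=True),
            # grouped 3x3 conv: equivalent to `cardinality` parallel paths,
            # each operating on bottleneck_width / cardinality channels
            nn.Conv2d(bottleneck_width, bottleneck_width, kernel_size=3,
                      padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck_width),
            nn.ReLU(inplace=True),
            # 1x1 conv restores the original channel count
            nn.Conv2d(bottleneck_width, in_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(in_channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection: output = x + aggregated transformations of x
        return self.relu(x + self.transform(x))

# Illustrative usage: a 256-d block with cardinality 32
block = ResNeXtBlock(in_channels=256, bottleneck_width=128, cardinality=32)
y = block(torch.randn(1, 256, 56, 56))
```

Because every block follows this one template, scaling the network amounts to choosing the cardinality, the bottleneck width, and how many blocks to stack, rather than hand-designing each stage.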