Learning Transferable Architectures for Scalable Image Recognition

By Barret Zoph et al.

Table of Contents

1. Introduction
2. Related Work
3. Method
4. Experiments and Results

Summary

This paper presents a method to learn neural network architectures for image classification directly on the dataset of interest, reducing the need for manual architecture engineering. Because searching directly on a large dataset is computationally expensive, the approach searches for architectural building blocks ("cells") on a small dataset (CIFAR-10) and transfers them to a larger one (ImageNet); this cell-based design defines a new search space, the "NASNet search space". Experiments demonstrate the effectiveness of the approach: the best learned architecture achieves state-of-the-art accuracy on ImageNet (82.7% top-1) while requiring less computation than comparable human-designed models. The paper also introduces ScheduledDropPath, a regularization technique in which paths within a cell are dropped with a probability that increases linearly over training, which improves generalization in NASNet models. Finally, the image features learned by NASNets are shown to transfer to other computer vision tasks such as object detection on COCO.
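To make the two key ideas concrete, here is a minimal Python sketch, not the authors' implementation: the function names, the three-block skeleton, and the default drop probability are illustrative assumptions. It shows a NASNet-style macro-skeleton that stacks a searched Normal cell with Reduction cells in between (the same cell is simply repeated more times and made wider when scaling from CIFAR-10 to ImageNet), and a single ScheduledDropPath step whose drop probability grows linearly over training.

```python
import numpy as np

def macro_architecture(n_repeats, n_blocks=3):
    """Sketch of a NASNet-style macro-skeleton: the searched Normal cell
    is stacked n_repeats times per block, with a stride-2 Reduction cell
    between blocks. Scaling n_repeats (and the per-cell filter count,
    omitted here) is how a cell found on a small dataset is reused in a
    larger model."""
    layers = []
    for block in range(n_blocks):
        layers.extend(["NormalCell"] * n_repeats)
        if block < n_blocks - 1:
            layers.append("ReductionCell")
    return layers


def scheduled_drop_path(path_outputs, step, total_steps,
                        final_drop_prob=0.5, rng=None):
    """ScheduledDropPath: drop each candidate path within a cell with a
    probability that grows linearly from 0 to final_drop_prob over
    training; the linear schedule is what distinguishes it from plain
    DropPath. Surviving paths are rescaled, dropout-style, so the
    expected sum is unchanged. The 0.5 default is illustrative."""
    rng = rng or np.random.default_rng()
    drop_prob = final_drop_prob * step / total_steps
    kept = [out / (1.0 - drop_prob)
            for out in path_outputs if rng.random() >= drop_prob]
    if not kept:  # keep one path at random so the cell still produces output
        kept = [path_outputs[rng.integers(len(path_outputs))]]
    return sum(kept)


# Example usage: a CIFAR-scale stack of cells, and one ScheduledDropPath
# step halfway through training.
print(macro_architecture(n_repeats=6))
print(scheduled_drop_path([np.ones(3), 2 * np.ones(3)],
                          step=5000, total_steps=10000))
```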