Neural Architecture Optimization

By Renqian Luo et al.
Published on Sept. 4, 2019

Table of Contents

1 Introduction
2 Related Work
3 Approach

Summary

Automatic design of neural network architectures without human intervention has long been of interest to the community. This paper proposes a method called Neural Architecture Optimization (NAO), which optimizes network architectures by mapping them into a continuous vector space and conducting optimization there via gradient-based methods. The core components of NAO are an encoder, a performance predictor, and a decoder. The encoder maps neural network architectures into a continuous space, the performance predictor predicts the accuracy of an architecture from its continuous representation, and the decoder recovers the discrete architecture from that representation. NAO shows promising results on tasks such as image classification and language modeling. The training and inference processes of NAO are detailed, showing how better architectures can be obtained through continuous optimization.
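The encoder/predictor/decoder loop above can be sketched in miniature. This is a hypothetical toy, not the paper's actual method: NAO uses LSTM-based encoder and decoder networks, whereas here the encoder is a fixed linear map, the predictor is linear (so its gradient is a constant vector), and the decoder is a nearest-neighbor lookup over known architectures. All names (`archs`, `W_enc`, `encode`, `predict`, `decode`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy candidate pool: 4 architectures encoded as one-hot vectors
# (a stand-in for NAO's discrete architecture strings).
archs = np.eye(4)

# "Encoder": a fixed linear map into a 3-d continuous space.
W_enc = rng.normal(size=(4, 3))
def encode(x):
    return x @ W_enc

# "Performance predictor" f(z) = w . z; being linear, its gradient is w.
w = rng.normal(size=3)
def predict(z):
    return z @ w

# "Decoder": map a continuous point back to the nearest known architecture.
def decode(z):
    emb = encode(archs)
    return archs[np.argmin(np.linalg.norm(emb - z, axis=1))]

# Inference step: embed an architecture, move along the predictor's
# gradient toward higher predicted accuracy, then decode the result.
z0 = encode(archs[0])
eta = 0.5                      # step size
z1 = z0 + eta * w              # gradient-ascent step in continuous space
better = decode(z1)
```

The key idea survives even in this toy: a discrete search over architectures is replaced by a differentiable step in the embedding space, after which the decoder returns to a discrete architecture.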