DyNet: The Dynamic Neural Network Toolkit

By Graham Neubig et al.
Published on Jan. 15, 2017

Table of Contents

Abstract
1 Introduction
2 Static Declaration vs. Dynamic Declaration
2.1 Static Declaration
2.2 Dynamic Declaration
3 Coding Paradigm

Summary

DyNet is a toolkit for implementing neural network models based on dynamic declaration of network structure. Unlike the static declaration strategy used in other toolkits, DyNet's dynamic declaration makes computation graph construction transparent, facilitating the implementation of more complicated network architectures. DyNet is designed to be idiomatic in its host programming languages (C++ or Python) and provides an optimized C++ backend for low-overhead graph construction.

The paper discusses the advantages of dynamic declaration over static declaration, highlighting the simplicity and efficiency it offers in handling variably sized or structured inputs. Because a new graph is built for every example, DyNet aims to minimize the computational cost of graph construction, enabling rapid prototyping and implementation of sophisticated neural network applications.

DyNet's backend is optimized for efficient execution on both CPU and GPU, and it natively supports use cases such as recurrent neural networks and tree-structured neural networks in natural language processing tasks. Additionally, DyNet provides support for mini-batching and data-parallel multi-processing to improve computational efficiency and ease parallelization of models during training.
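The core idea of dynamic declaration, where the computation graph is rebuilt per example and can mirror each input's structure, can be sketched in plain Python. This is a conceptual toy, not DyNet's actual API: the `Node`, `combine`, and `build_graph` names are illustrative stand-ins for learned expression nodes and composition functions.

```python
# Toy illustration of dynamic declaration: a fresh computation graph is
# declared for every input, so its shape follows the example's structure
# (here, a binary tree, as in tree-structured networks for NLP).
# Names are illustrative; this is not DyNet's real API.

class Node:
    """An expression node in a per-example computation graph."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = children

def leaf(x):
    # A leaf expression (e.g. a word embedding lookup in a real model).
    return Node(float(x))

def combine(left, right):
    # Stand-in for a learned composition function (e.g. a TreeLSTM cell);
    # here it simply averages the children's values.
    return Node((left.value + right.value) / 2.0, (left, right))

def build_graph(tree):
    # Recursively declare graph nodes mirroring the input tree's shape;
    # a new graph is constructed for every example.
    if isinstance(tree, tuple):
        return combine(build_graph(tree[0]), build_graph(tree[1]))
    return leaf(tree)

def graph_size(node):
    return 1 + sum(graph_size(c) for c in node.children)

# Two inputs with different structure yield differently shaped graphs,
# which is awkward to express under static declaration.
g1 = build_graph((1.0, (2.0, 3.0)))    # nested tree -> 5 graph nodes
g2 = build_graph((1.0, 2.0))           # flat pair   -> 3 graph nodes
print(graph_size(g1), graph_size(g2))  # -> 5 3
```

In DyNet itself, the same pattern appears as calling `renew_cg()` before each example and then building expressions whose arrangement follows that example's structure.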