Hope: High-Order Polynomial Expansion of Black-Box Neural Networks

By Tingxiong Xiao et al.

Table of Contents

Abstract; Introduction; High-Order Derivative Rule for Composite Functions; High-Order Derivative Rule for Neural Networks; Output Unit; Fully Connected Layer; Convolutional Layer; Nonlinear Activation Layer

Summary

This paper introduces HOPE (High-Order Polynomial Expansion), a method for expanding a neural network into a high-order Taylor polynomial around a reference input. It addresses the 'black box' nature of neural networks by providing an explicit local interpretation of the learned function. The authors report high accuracy, low computational complexity, and good convergence. The paper derives high-order derivative rules first for composite functions and then for neural networks, treating each module in turn: the output unit, fully connected layers, convolutional layers, and nonlinear activation layers. Numerical analysis and experiments demonstrate the effectiveness of HOPE and its applications in function discovery, fast inference, and feature selection.
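The core idea of a local Taylor expansion of a network can be illustrated with a toy example. HOPE itself derives the derivatives analytically, layer by layer; the sketch below instead estimates them with central finite differences around a reference input, using a hypothetical one-neuron "network", so it only conveys the general idea of approximating a black-box model by a local polynomial.

```python
import math

def net(x):
    # Hypothetical one-neuron "network" standing in for a trained model.
    return math.tanh(0.8 * x + 0.1)

def taylor_coeffs(f, x0, order, h=1e-2):
    # Estimate Taylor coefficients of f around x0 via central finite
    # differences (HOPE computes these analytically, module by module).
    coeffs = [f(x0)]
    for k in range(1, order + 1):
        # k-th central difference on a (possibly half-integer) grid.
        dk = sum((-1) ** i * math.comb(k, i) * f(x0 + (k / 2 - i) * h)
                 for i in range(k + 1)) / h ** k
        coeffs.append(dk / math.factorial(k))
    return coeffs

def poly_eval(coeffs, x0, x):
    # Evaluate the Taylor polynomial sum_k c_k (x - x0)^k.
    return sum(c * (x - x0) ** k for k, c in enumerate(coeffs))
```

Near the reference input the low-order polynomial closely matches the network; for example, a third-order expansion around `x0 = 0.2` approximates `net(0.3)` to within about 1e-4, which is the kind of explicit local surrogate HOPE provides.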