The Principles of Deep Learning Theory

By Daniel A. Roberts, Sho Yaida, and Boris Hanin
Published on Aug. 24, 2021

Table of Contents

Preface
0 Initialization
1 Pretraining
2 Neural Networks
3 Effective Theory of Deep Linear Networks at Initialization
4 RG Flow of Preactivations
5 Effective Theory of Preactivations at Initialization
6 Bayesian Learning
7 Gradient-Based Learning
8 RG Flow of the Neural Tangent Kernel
9 Effective Theory of the NTK at Initialization
10 Kernel Learning
11 Representation Learning
∞ The End of Training
ε Epilogue: Model Complexity from the Macroscopic Perspective
A Information in Deep Learning
B Residual Learning
References
Index

Summary

The Principles of Deep Learning Theory is a research monograph that develops an effective-theory approach to understanding deep neural networks. As the chapter list above indicates, it covers networks at initialization, the RG flow of preactivations and of the neural tangent kernel (NTK), Bayesian learning, gradient-based learning, kernel learning, and representation learning. The book is written in a pedagogical style and assumes only linear algebra, multivariable calculus, and informal probability theory. The authors aim to provide a comprehensive understanding of deep learning theory for practitioners and theorists alike.