Toward an Integration of Deep Learning and Neuroscience

By Adam H. Marblestone, Greg Wayne, and Konrad P. Kording
Published on June 13, 2016

Table of Contents

1 Introduction
1.1 Hypothesis 1 – The brain optimizes cost functions
1.2 Hypothesis 2 – Cost functions are diverse across areas and change over development
1.3 Hypothesis 3 – Specialized systems allow efficiently solving key computational problems
2 The brain can optimize cost functions
2.1 Local self-organization and optimization without multi-layer credit assignment
2.2 Biological implementation of optimization
2.3 Alternative mechanisms for learning
3 The cost functions are diverse across brain areas and time
3.1 How cost functions may be represented and applied
3.2 Cost functions for unsupervised learning
3.3 Cost functions for supervised learning
4 Optimization occurs in the context of specialized structures
4.1 Structured forms of memory
4.2 Structured routing systems
4.3 Structured state representations to enable efficient algorithms
4.4 Other specialized structures
5 Machine learning inspired neuroscience
5.1 Hypothesis 1 – Existence of cost functions
5.2 Hypothesis 2 – Biological fine-structure of cost functions
5.3 Hypothesis 3 – Embedding within a pre-structured architecture
6 Neuroscience inspired machine learning
6.1 Hypothesis 1 – Existence of cost functions
6.2 Hypothesis 2 – Biological fine-structure of cost functions
6.3 Hypothesis 3 – Embedding within a pre-structured architecture
7 Did evolution separate cost functions from optimization algorithms?
8 Conclusions

Summary

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
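To make the three hypotheses concrete, here is a minimal NumPy sketch written for this summary rather than taken from the paper: a fixed two-layer architecture (hypothesis 3) in which the hidden layer first optimizes an unsupervised reconstruction cost and a readout later optimizes a supervised cost, so that the effective cost function differs across layers ("areas") and across training phases ("development"), in the spirit of hypotheses 1 and 2. The toy task, the particular costs, and all names in the code are illustrative assumptions, not anything the paper specifies.

```python
# Illustrative sketch only: a toy two-layer network trained with different
# cost functions per layer and per "developmental" phase. The paper proposes
# no specific implementation; this is a hypothetical minimal example.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs X and binary labels y from a random linear rule.
X = rng.normal(size=(500, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

# Fixed "pre-structured" architecture: one hidden layer feeding a readout.
W1 = rng.normal(scale=0.1, size=(20, 10))     # input -> hidden
W_dec = rng.normal(scale=0.1, size=(10, 20))  # hidden -> reconstruction
w2 = rng.normal(scale=0.1, size=10)           # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.01

# Phase 1 ("early development"): the hidden layer optimizes an unsupervised
# reconstruction cost, with no access to labels.
for _ in range(200):
    H = np.tanh(X @ W1)
    err = H @ W_dec - X                 # reconstruction error
    dH = (err @ W_dec.T) * (1 - H**2)   # gradient through tanh
    W_dec -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ dH / len(X)

# Phase 2 ("later development"): the readout optimizes a supervised
# cross-entropy cost on top of the now-frozen hidden representation.
H = np.tanh(X @ W1)
for _ in range(500):
    p = sigmoid(H @ w2)
    w2 -= lr * H.T @ (p - y) / len(X)

acc = ((sigmoid(H @ w2) > 0.5) == y).mean()
print(f"readout accuracy after phased training: {acc:.2f}")
```

The point of the sketch is only the phased, layer-specific costs; how such optimization might be implemented by biological circuitry, rather than by this gradient-descent recipe, is the subject of the paper itself.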