Deep Optimisation: Solving Combinatorial Optimisation Problems Using Deep Neural Networks
By J.R. Caldwell et al.
Published on Nov. 2, 2018
Table of Contents
Abstract
Introduction
Model Building Optimisation Algorithms
The Deep Optimisation Algorithm
Model Optimisation Cycle
Solution Optimisation Cycle
Transition: Searching in Combinations of Features
Performance Analysis of Deep Optimisation
How Deep Optimisation Works
Summary
Deep Optimisation (DO) combines evolutionary search with deep neural networks (DNNs) in a novel way: the network is used not to optimise a learning algorithm but to find solutions to an optimisation problem. Deep learning has been applied successfully to classification, regression, decision, and generative tasks, and this paper extends its application to solving combinatorial optimisation problems. The Hierarchical Transformation Optimisation Problem (HTOP) and the Parity Modular Constraint Problem (MC parity) are used to demonstrate DO's performance.

DO uses a deep, multi-layered feed-forward neural network within the Model-Building Optimisation Algorithms framework. The algorithm consists of two optimisation cycles, a solution optimisation cycle and a model optimisation cycle, interlocked in a two-way relationship: locally optimal solutions are used to train the model, and the trained model in turn informs the variation applied during the next round of search. The model is an autoencoder (AE), which generates new candidate solutions as reconstructions from its hidden layers; DO applies a layer-wise procedure to both training and sample generation. The solution optimisation cycle produces locally optimal solutions guided by this model-informed variation.

HTOP is a constraint problem with consistent constraints, designed to show how DO works, with specific regard to rescaling the variation operator and using the layer-wise procedure. DO demonstrates the ability to solve problems containing high-order dependencies that state-of-the-art methods cannot.
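To make the interlocked cycles concrete, here is a minimal sketch of the DO loop in Python. It is not the authors' implementation: the objective function (a simple onemax stand-in), the single-hidden-layer autoencoder, the network sizes, and the latent-perturbation scheme are all illustrative assumptions chosen to keep the example short and runnable; a real DO run would use a deeper AE trained layer-wise on a problem such as HTOP.

```python
# Illustrative sketch of the Deep Optimisation loop (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
N = 32   # solution length in bits (assumed)
H = 16   # hidden units in the single-layer autoencoder (assumed)

def fitness(x):
    # Stand-in objective (onemax); replace with the problem of interest.
    return int(x.sum())

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hill_climb(x, variation, steps=200):
    # Solution optimisation cycle: accept a variation only if it does not
    # decrease fitness, yielding a locally optimal solution.
    for _ in range(steps):
        y = variation(x)
        if fitness(y) >= fitness(x):
            x = y
    return x

def bit_flip(x):
    y = x.copy()
    y[rng.integers(N)] ^= 1
    return y

def train_autoencoder(data, epochs=500, lr=0.5):
    # Model optimisation cycle: fit an autoencoder to the current set of
    # locally optimal solutions (plain SGD on squared reconstruction error).
    W1 = rng.normal(0, 0.1, (N, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, (H, N)); b2 = np.zeros(N)
    for _ in range(epochs):
        h = sigmoid(data @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - data) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(data); b2 -= lr * d_out.mean(0)
        W1 -= lr * data.T @ d_h / len(data); b1 -= lr * d_h.mean(0)
    return W1, b1, W2, b2

def model_informed_variation(x, params):
    # Rescaled variation: perturb the solution in the hidden (feature) space
    # and decode, so a single move can change many correlated bits at once.
    W1, b1, W2, b2 = params
    h = sigmoid(x @ W1 + b1)
    h[rng.integers(H)] = rng.random()   # jitter one latent feature
    return (sigmoid(h @ W2 + b2) > 0.5).astype(int)

# Interlocked cycles: local optima train the model; the model then reshapes
# variation for the next round of local search.
population = [hill_climb(rng.integers(0, 2, N), bit_flip) for _ in range(30)]
for _ in range(3):
    params = train_autoencoder(np.array(population, dtype=float))
    population = [
        hill_climb(x, lambda s: model_informed_variation(s, params))
        for x in population
    ]
print("best fitness:", max(fitness(x) for x in population))
```

The division of labour is the point of the sketch: bit-flip variation finds an initial set of local optima, the autoencoder compresses what those optima have in common, and later variation is applied in the learned feature space, which is what "rescaling the variation operator" refers to above.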