Learning To Optimize Quantum Neural Network Without Gradients
By Ankit Kulshrestha et al.
Table of Contents
Abstract
I. Introduction
II. Quantum Neural Networks
III. Meta-optimization Framework
IV. Our Algorithm
V. Theoretical Performance
Summary
Quantum Machine Learning is an emerging sub-field whose goal is to perform pattern recognition tasks by encoding data into quantum states. This paper proposes a meta-optimization algorithm for training Quantum Neural Networks (QNNs) without relying on gradient information, aiming to reach better-quality minima in fewer circuit evaluations than existing gradient-based methods. The approach addresses both the limitations of current quantum hardware and the inefficiency of computing gradients for QNNs. The authors demonstrate a significant speedup in training time while achieving minima of quality comparable to conventional methods. The algorithm is the first to train QNNs without explicit gradient computation, providing a blueprint for future gradient-free meta-optimization algorithms.
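The core idea of gradient-free QNN training can be illustrated with a toy example. The sketch below is not the paper's meta-optimizer; it is a minimal stand-in that simulates a two-qubit variational circuit analytically (an Ry rotation on each qubit of |00⟩, whose Z⊗Z expectation is cos(θ₁)·cos(θ₂)) and minimizes that cost with a simple (1+1) evolution strategy, a generic gradient-free method chosen here purely for illustration. All function names and hyperparameters are assumptions, not taken from the paper.

```python
import numpy as np

def qnn_cost(theta):
    """Toy QNN cost: <Z x Z> after Ry(theta[0]), Ry(theta[1]) on |00>.

    Analytically this equals cos(theta[0]) * cos(theta[1]),
    so the global minimum is -1 (e.g. at theta = (pi, 0)).
    """
    return np.cos(theta[0]) * np.cos(theta[1])

def train_gradient_free(n_iters=2000, seed=0):
    """Minimize qnn_cost with a (1+1) evolution strategy: propose a
    random perturbation, keep it only if the cost improves. No gradient
    of the circuit is ever evaluated."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=2)  # random initial parameters
    best = qnn_cost(theta)
    sigma = 0.5  # perturbation scale, slowly annealed below
    for _ in range(n_iters):
        candidate = theta + sigma * rng.normal(size=2)
        cost = qnn_cost(candidate)
        if cost < best:  # greedy acceptance: only downhill moves
            theta, best = candidate, cost
        sigma *= 0.999  # shrink step size over time
    return theta, best

theta_opt, cost_opt = train_gradient_free()
print(cost_opt)  # should approach the global minimum of -1
```

Each iteration needs only a single cost (circuit) evaluation, which is the practical appeal of gradient-free schemes on quantum hardware, where parameter-shift gradients require extra circuit executions per parameter. The paper's contribution is a far more sample-efficient, learned (meta-optimized) update rule than the naive random search shown here.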