Contextuality and Inductive Bias in Quantum Machine Learning

By Joseph B. et al.
Published on April 18, 2023

Table of Contents

1 Introduction
2 Generalised contextuality
3 Contextuality of multi-task machine learning models
4 Inductive bias and limits of expressivity
5 Learning the rock, paper, scissors game
6 Contextuality of general learning models
7 Encoding inductive bias into quantum learning models
8 Outperforming classical surrogates
9 Avenues for contextuality-inspired inductive bias
10 Outlook

Summary

The document discusses the roles of contextuality and inductive bias in quantum machine learning, emphasizing the need to identify data structures that can be effectively encoded into quantum learning models. It introduces a framework for studying contextuality in machine learning and explores the relationship between contextuality and the expressivity of learning models. Through a toy learning problem, the authors demonstrate that contextual model classes can outperform their noncontextual counterparts, and they evaluate the performance of quantum models on learning tasks inspired by contextuality. The document concludes with insights on the potential impact of contextuality-inspired inductive bias in quantum machine learning.