The Mathematics of Adversarial Attacks in AI — Why Deep Learning Is Unstable Despite the Existence of Stable Neural Networks

By A. Bastounis et al.
Published on Sept. 13, 2021

Table of Contents

1. Introduction
2. Main theorems – Methodological barriers, Smale’s 18th problem and the limits of AI
2.1. Interpreting Theorem 2.2
3. Main Results I – Trained NNs become unstable despite the existence of stable and accurate NNs
3.1. Interpreting Theorem 2.2
4. Connection to previous work
5. Proofs of the main results
References

Summary

The paper studies the instability problem in deep learning: trained neural networks are vulnerable to adversarial perturbations even though stable and accurate neural networks exist for the same problems. This leads to a mathematical paradox, since any training procedure based on a fixed architecture can produce unstable networks despite the provable existence of accurate and stable ones, which, as the authors show, no algorithm can compute. These methodological barriers in current AI techniques are connected to Smale's 18th problem on the limits of AI, and the authors argue that alternative methodologies are needed to address the instability issue.
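The kind of instability discussed above can be illustrated with a standard gradient-based adversarial perturbation. The sketch below is not the authors' construction; it is a minimal fast gradient sign method (FGSM) example, assuming PyTorch and a hypothetical toy classifier, showing how a small input perturbation can flip a trained network's prediction.

```python
import torch
import torch.nn as nn

def fgsm_perturbation(model, x, y, eps):
    """Return x + delta with ||delta||_inf <= eps, chosen to increase the loss.

    Illustrative FGSM step only; not the paper's construction, just a
    demonstration of what adversarial instability looks like in practice.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, confined to the eps-ball around x.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Hypothetical toy classifier on 2-D inputs (stand-in for a trained NN).
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 2)
y = torch.tensor([0])
x_adv = fgsm_perturbation(model, x, y, eps=0.1)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

If the predicted label changes under such a small perturbation, the network is unstable at x; the paper's point is that this behaviour persists for trained networks even though stable and accurate networks provably exist.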