Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

By Bernardo Aquino et al.
Published on Feb. 14, 2022

Table of Contents

I. INTRODUCTION
II. BACKGROUND
III. ENFORCING INCREMENTAL SECTOR BOUNDEDNESS
IV. TRAINING ROBUST NEURAL NETWORKS

Summary

This paper addresses robustness against adversarial attacks in neural networks through an incremental dissipativity-based approach. It introduces a robustness certificate for neural networks in the form of a Linear Matrix Inequality (LMI) and proposes conditions for enforcing incremental sector boundedness of neural network layers. The work highlights the effectiveness of spectral norm regularization for improving generalizability and stability in neural networks. The theoretical developments provide robustness guarantees that can be imposed during training and yield an approach that scales to deep neural network structures.
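To illustrate the spectral norm regularization technique mentioned above, here is a minimal training-loop sketch in PyTorch. It penalizes the spectral norm (largest singular value) of each weight matrix alongside the task loss; the model architecture, loss, and the coefficient reg_weight are hypothetical placeholders, and this is a generic illustration of the technique rather than the paper's specific dissipativity-based formulation.

```python
# Sketch: spectral norm regularization for a simple feedforward network.
# Assumes a PyTorch model built from nn.Linear layers; names and hyperparameters
# (reg_weight, layer sizes) are illustrative, not taken from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
reg_weight = 1e-2  # hypothetical regularization coefficient


def spectral_penalty(model: nn.Module) -> torch.Tensor:
    # Sum of the spectral norms (largest singular values) of all Linear weights.
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, nn.Linear):
            penalty = penalty + torch.linalg.matrix_norm(module.weight, ord=2)
    return penalty


def training_step(x: torch.Tensor, y: torch.Tensor) -> float:
    # One gradient step on task loss plus the spectral norm penalty,
    # which discourages large layer-wise Lipschitz constants.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y) + reg_weight * spectral_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Penalizing per-layer spectral norms bounds each layer's Lipschitz constant, which is one common way to limit how much small input perturbations can be amplified through the network.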