Boosting Adversarial Attacks on Neural Networks with Better Optimizer

By H. Yin et al.

Table of Contents

1. Introduction
2. Related Work
3. Methodology
4. Experiment

Summary

Convolutional neural networks have surpassed human performance on image recognition tasks, yet they remain vulnerable to adversarial examples. The paper shows that adversarial attacks in black-box settings can be strengthened by combining a modified Adam gradient descent algorithm with an iterative gradient-based attack. The resulting Adam Iterative Fast Gradient Method (AI-FGM) incorporates the Adam optimizer's adaptive updates into the attack iteration, improving the transferability of the crafted adversarial examples and achieving higher black-box attack success rates than existing methods. Evaluated across multiple networks, AI-FGM consistently outperformed baseline iterative attacks, and attacking an ensemble of networks further improved transferability. Experimental results validate the effectiveness of the proposed method.
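To make the idea concrete, below is a minimal sketch of an Adam-style iterative fast gradient attack in PyTorch. It is an illustration under stated assumptions, not the paper's exact AI-FGM update: the function name, the hyperparameter defaults, and the sign-based final step are all assumptions, and the paper's normalization of the Adam update may differ.

```python
# Hypothetical sketch of an Adam-driven iterative fast gradient attack
# (AI-FGM-style). Assumes `model` is a PyTorch classifier returning
# logits; the paper's exact update rule and hyperparameters may differ.
import torch
import torch.nn.functional as F

def adam_iterative_attack(model, x, y, eps=16 / 255, steps=10,
                          beta1=0.9, beta2=0.999, delta=1e-8):
    """Craft L_inf-bounded adversarial examples using Adam-style
    gradient accumulation inside the attack loop.

    x: input batch scaled to [0, 1]; y: true labels;
    eps: perturbation budget; steps: number of iterations.
    """
    alpha = eps / steps                  # per-step budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)              # first-moment estimate
    v = torch.zeros_like(x)              # second-moment estimate
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Adam moment updates with bias correction, applied to the
        # attack gradient instead of a training gradient
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        update = m_hat / (v_hat.sqrt() + delta)
        # ascend the loss, then project back into the eps-ball
        # around x and the valid pixel range
        x_adv = x_adv.detach() + alpha * update.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv
```

In this sketch, transferability is probed by crafting `x_adv` on a white-box surrogate model and evaluating it on unseen models; per the summary above, the paper additionally attacks an ensemble of networks to improve transfer further, which in a sketch like this would amount to averaging the loss over several surrogate models before taking the gradient.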