A Review of Adversarial Attacks in Computer Vision

By Zhang Y. et al.
Published on Aug. 15, 2023

Table of Contents

1 Introduction
2 Adversarial Attack Methods
2.1 White-box attacks
2.1.1 Optimization-based white-box attacks
2.1.2 Gradient-based white-box attacks
2.1.3 Other white-box attack methods
2.2 Black-box attacks
2.2.1 Nicolas Papernot’s attack
2.2.2 NES
3 Conclusion

Summary

Deep neural networks are widely used across vision tasks but remain vulnerable to adversarial attacks: small, often imperceptible perturbations that cause misclassification and frequently transfer between models. Attacks are categorized as white-box or black-box depending on the attacker's access to the model, and as targeted or non-targeted depending on whether a specific output is forced. Adversarial perturbations are generated with optimization-based or generative methods, and tasks including image classification, object detection, and semantic segmentation are all susceptible. Well-known white-box algorithms include L-BFGS and FGSM, while black-box attacks typically train a substitute model and transfer adversarial samples generated on it to the target. For real-world systems, threat models such as the query-limited and label-only settings have been proposed.
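
As an illustration of the gradient-based white-box family the review covers, below is a minimal sketch of FGSM; the model, inputs, and epsilon value are assumptions for the example, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM sketch: perturb x along the sign of the loss gradient.

    Assumes `model` is a classifier returning logits and `x` holds
    pixel values in [0, 1] (illustrative choices, not from the paper).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid pixel range and drop the gradient history.
    return x_adv.clamp(0.0, 1.0).detach()
```

Here epsilon bounds the L-infinity norm of the perturbation: larger values yield stronger but more visible attacks, which is the trade-off the single-step FGSM formulation makes explicit.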