Summary
Deep neural networks achieve strong performance across many tasks but are vulnerable to adversarial attacks: small, carefully crafted input perturbations that cause misclassification and often transfer between models. Attacks are categorized as white-box or black-box depending on the attacker's knowledge of the target model, and as targeted or non-targeted depending on whether a specific wrong label is sought. Adversarial perturbations are generated with optimization-based or generative methods, and they affect a range of tasks, including image classification, object detection, and semantic segmentation. Well-known white-box attack algorithms include L-BFGS and FGSM. Black-box attacks typically train a substitute model and craft adversarial examples on it, exploiting transferability to fool the target. More restrictive threat models, such as query-limited and label-only settings, have been proposed to better reflect real-world systems.
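As a concrete illustration of the white-box setting, the sketch below implements the FGSM update x_adv = x + eps * sign(grad_x J(theta, x, y)) in PyTorch. The toy model, random data, and epsilon value are placeholders for illustration only and are not taken from the original text.

```python
# Minimal FGSM sketch (non-targeted, white-box), assuming a differentiable
# classifier and inputs scaled to [0, 1]; all names below are illustrative.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x_adv = clamp(x + epsilon * sign(grad_x loss), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-ins for a real classifier and dataset.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)           # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (4,))         # ground-truth labels
    x_adv = fgsm_attack(model, x, y, epsilon=0.03)
    print((x_adv - x).abs().max())         # perturbation bounded by epsilon
```

In a black-box transfer attack, the same update would be computed on a substitute model trained to mimic the target, and the resulting x_adv would then be submitted to the target system.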