Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey

By Wei Emma Zhang et al.
Published on April 10, 2019

Table of Contents

1. INTRODUCTION
2. OVERVIEW OF ADVERSARIAL ATTACKS AND DEEP LEARNING TECHNIQUES IN NATURAL LANGUAGE PROCESSING
2.1 Adversarial Attacks on Deep Learning Models: The General Taxonomy
2.1.1 Definitions
2.1.2 Threat Model
2.1.3 Measurement
2.2 Deep Learning in NLP
3. CCS Concepts
4. Additional Key Words and Phrases
5. Papers Selection
6. Contributions of this Survey
7. Conclusion

Summary

This document surveys adversarial attacks on deep learning models in natural language processing. It discusses the vulnerability of deep neural networks to adversarial examples, subtly perturbed inputs that cause models to make false predictions. The survey reviews research efforts on generating textual adversarial examples and addresses the differences and challenges of attacking textual data compared to images. It provides a comprehensive overview of related work, discusses defense strategies, and identifies open issues in the field. The survey aims to assist researchers and practitioners interested in attacking textual deep neural models and to serve as a reference for applying deep learning in the NLP community.
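To make the idea of a textual adversarial example concrete, the sketch below shows a toy synonym-substitution attack: a few meaning-preserving word swaps flip the prediction of a keyword-based classifier. It is purely illustrative and not taken from the survey; the names (toy_sentiment_score, SYNONYMS, greedy_substitution_attack) are hypothetical stand-ins, with the scoring function playing the role of a deployed deep NLP model.

```python
# Illustrative sketch only: a synonym-substitution attack on a toy text classifier.
# The "model" below is a stand-in for a real deep NLP model; SYNONYMS is a tiny
# hand-written lexicon of meaning-preserving swaps.

SYNONYMS = {"love": "adore", "great": "fantastic"}   # swaps a human barely notices
POSITIVE = {"love", "great"}                          # words the toy model keys on
NEGATIVE = {"slow", "terrible", "boring"}

def toy_sentiment_score(tokens):
    """Stand-in classifier: score > 0 means the positive label, otherwise negative."""
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def predict(tokens):
    return "positive" if toy_sentiment_score(tokens) > 0 else "negative"

def greedy_substitution_attack(tokens):
    """Swap words for synonyms one at a time until the predicted label flips."""
    original = predict(tokens)
    adversarial = list(tokens)
    for i, word in enumerate(adversarial):
        if word in SYNONYMS:
            adversarial[i] = SYNONYMS[word]           # small, meaning-preserving edit
            if predict(adversarial) != original:
                break                                 # label flipped: attack succeeded
    return adversarial

if __name__ == "__main__":
    sentence = "the pacing is slow but i love this great movie".split()
    print(predict(sentence), sentence)        # positive
    attacked = greedy_substitution_attack(sentence)
    print(predict(attacked), attacked)        # negative, though the meaning is unchanged
```

Unlike images, where imperceptible pixel-level noise suffices, textual perturbations must stay discrete and grammatical, which is why word-level substitution strategies of this kind are a common starting point in the literature the survey reviews.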