Article

Improving the Reliability of Deep Neural Networks in NLP: A Review

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 191

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2019.105210

Keywords

Adversarial examples; Adversarial texts; Natural language processing

Deep learning models have achieved great success in solving a variety of natural language processing (NLP) problems. An ever-growing body of research, however, illustrates the vulnerability of deep neural networks (DNNs) to adversarial examples: inputs modified by small, deliberate perturbations that fool a target model into producing incorrect outputs. This vulnerability has become one of the main hurdles precluding the deployment of neural networks in safety-critical environments. This paper discusses the contemporary use of adversarial examples to foil DNNs and presents a comprehensive review of their use to improve the robustness of DNNs in NLP applications. We summarize recent approaches for generating adversarial texts and propose a taxonomy to categorize them. We further review various types of defensive strategies against adversarial examples, explore their main challenges, and highlight some future research directions. © 2019 Elsevier B.V. All rights reserved.
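
To make the notion of an adversarial text concrete, here is a minimal, self-contained Python sketch, not taken from the paper: toy_sentiment_model is a hypothetical keyword-based scorer standing in for a trained DNN classifier, and character_swap_attack greedily swaps adjacent characters in one word at a time until the model's label flips.

```python
# Toy illustration (not from the paper) of an adversarial text:
# a tiny character-level perturbation that flips a classifier's prediction.

def toy_sentiment_model(text: str) -> str:
    """Hypothetical keyword-based scorer, standing in for a trained DNN."""
    positive = {"great", "good", "excellent"}
    negative = {"bad", "awful", "terrible"}
    tokens = text.lower().split()
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return "positive" if score >= 0 else "negative"

def character_swap_attack(text: str, model) -> str:
    """Greedily swap adjacent characters in one word at a time until the
    model's prediction changes (a toy character-level attack)."""
    original = model(text)
    words = text.split()
    for i, word in enumerate(words):
        for j in range(len(word) - 1):
            # Swap characters j and j+1 within the current word.
            perturbed = word[:j] + word[j + 1] + word[j] + word[j + 2:]
            candidate = " ".join(words[:i] + [perturbed] + words[i + 1:])
            if model(candidate) != original:
                return candidate  # small edit, different label
    return text  # no successful one-swap perturbation found

if __name__ == "__main__":
    text = "the film was terrible"
    adv = character_swap_attack(text, toy_sentiment_model)
    print(toy_sentiment_model(text), "->", toy_sentiment_model(adv))  # negative -> positive
    print(adv)  # e.g. "the film was etrrible"
```

Character-level swaps like this are among the simplest perturbation families a survey of adversarial texts covers; word-level attacks that substitute synonyms follow the same greedy search pattern while better preserving readability.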
