Article

Attention in Natural Language Processing

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2020.3019893

Keywords

Task analysis; Computer architecture; Visualization; Neural networks; Natural language processing (NLP); Taxonomy; Computational modeling; Neural attention; Review; Survey

Funding

  1. Horizon 2020, project AI4EU [825619]


This article presents a unified model for attention architectures in natural language processing, together with a taxonomy based on four dimensions. It shows how prior information can be exploited in attention models and discusses ongoing research efforts and open challenges in this domain.

Abstract

Attention is an increasingly popular mechanism used in a wide range of neural architectures. The mechanism itself has been realized in a variety of formats. However, because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures in natural language processing, with a focus on those designed to work with vector representations of the textual data. We propose a taxonomy of attention models according to four dimensions: the representation of the input, the compatibility function, the distribution function, and the multiplicity of the input and/or output. We present examples of how prior information can be exploited in attention models and discuss ongoing research efforts and open challenges in the area, providing the first extensive categorization of the vast body of literature in this exciting domain.
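
As a rough illustration of the generic scheme the abstract's taxonomy refers to, and not code taken from the article itself, the minimal sketch below assumes an additive compatibility function and a softmax distribution function: a query is scored against a set of keys, the scores are normalized into weights, and the weights combine the values into a single context vector. All names (additive_compatibility, softmax, attend) and dimensions are illustrative.

import numpy as np

def additive_compatibility(query, keys, W, U, v):
    # One common compatibility function (additive style):
    # score_i = v . tanh(W q + U k_i)
    return np.tanh(query @ W + keys @ U) @ v

def softmax(scores):
    # Distribution function: turn raw scores into a probability distribution.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def attend(query, keys, values, W, U, v):
    scores = additive_compatibility(query, keys, W, U, v)   # compatibility
    weights = softmax(scores)                                # distribution
    return weights @ values, weights                         # weighted context

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_q, d_k, d_v, d_a, n = 8, 8, 6, 16, 5        # illustrative sizes
    query = rng.normal(size=d_q)
    keys = rng.normal(size=(n, d_k))
    values = rng.normal(size=(n, d_v))
    W = rng.normal(size=(d_q, d_a))
    U = rng.normal(size=(d_k, d_a))
    v = rng.normal(size=d_a)
    context, weights = attend(query, keys, values, W, U, v)
    print(context.shape, weights.round(3))        # (6,) and 5 weights summing to 1

Other choices along the same taxonomy dimensions, such as a dot-product compatibility function or a sparse distribution function, would plug into the same attend skeleton without changing its overall structure.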

Authors

Andrea Galassi, Marco Lippi, and Paolo Torroni

