Article

Text information aggregation with centrality attention

Journal

SCIENCE CHINA-INFORMATION SCIENCES
Volume 64, Issue 12

Publisher

SCIENCE PRESS
DOI: 10.1007/s11432-019-1519-6

Keywords

information aggregation; eigen centrality; text classification; natural language processing; deep learning

Funding

  1. National Natural Science Foundation of China [61751201, 61672162]
  2. Shanghai Municipal Science and Technology Major Project [2018SHZDZX01]


In this study, a new self-attention mechanism called eigen-centrality self-attention is proposed to incorporate higher-order relationships among words in text sequence encoding, leading to better results in multiple tasks compared to baseline models. The power method algorithm is adopted to compute the dominant eigenvector of the graph, and an iterative approach is derived to reduce memory consumption and computation requirement during the process.
Many natural language processing problems require encoding a text sequence as a fixed-length vector, which usually involves aggregating the representations of all the words, e.g., by pooling or self-attention. However, these widely used aggregation approaches do not take higher-order relationships among the words into account. We therefore propose a new way of obtaining aggregation weights, called eigen-centrality self-attention. More specifically, we build a fully-connected graph over all the words in a sentence and compute the eigen-centrality of each word as its attention score. Explicitly modeling the relationships as a graph captures higher-order dependencies among words, which helps us achieve better results on five text classification tasks and an SNLI task than baseline models such as pooling, self-attention, and dynamic routing. To compute the dominant eigenvector of the graph, we adopt the power method to obtain the eigen-centrality measure. Moreover, we derive an iterative approach to computing the gradient of the power method, which reduces both memory consumption and computational cost.
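The aggregation described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes a simple exponentiated dot-product similarity as the graph's edge weights and a fixed number of power-method iterations; the function name and all details are hypothetical.

```python
import numpy as np

def eigen_centrality_attention(H, n_iter=50):
    """Aggregate word representations via eigen-centrality attention.

    H: (seq_len, dim) array of word representations.
    Returns the (dim,) aggregated vector and the (seq_len,) attention weights.
    """
    # Fully-connected word graph: non-negative pairwise similarities
    # (here, exponentiated dot products) as the adjacency matrix.
    A = np.exp(H @ H.T)                     # (seq_len, seq_len)

    # Power method: repeatedly apply A and renormalize to converge
    # to the dominant eigenvector, whose entries are the
    # eigen-centrality scores of the words.
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(n_iter):
        v = A @ v
        v = v / np.linalg.norm(v)

    # Normalize the centrality scores into attention weights.
    alpha = v / v.sum()
    return alpha @ H, alpha
```

Because the adjacency matrix has strictly positive entries, the dominant eigenvector is positive (Perron-Frobenius), so the resulting weights form a valid attention distribution. Backpropagating through the unrolled iterations as written stores every intermediate `v`; the paper's iterative gradient computation avoids exactly that memory cost.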

