Article

HITS-based attentional neural model for abstractive summarization

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 222, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.106996

Keywords

HITS-based attention; Abstractive summarization; Comparison mechanism

Funding

  1. National Key Research and Development Project of China [2018YFB1402600]
  2. National Natural Science Foundation of China [61872296, 61772429]
  3. MOE (Ministry of Education in China) Project of Humanities and Social Sciences [18YJC870001]
  4. Fundamental Research Funds for the Central Universities [3102019DHKY04]


This research proposes a HITS-based attention mechanism to improve summarization performance, combined with KL divergence refinement and a comparison mechanism. Experimental results demonstrate the method's strong summarization performance.
Automatic abstractive summary generation remains an open problem in natural language processing. Conventional encoder-decoder-based abstractive summarization methods often suffer from repetition and semantic irrelevance. Recent studies apply traditional attention or graph-based attention to the encoder-decoder model to tackle these problems, under the assumption that all the sentences in the original document are indistinguishable from each other. However, within a document the same words in different sentences are not equally important; that is, the words in a trivial sentence are less important than the words in a salient sentence. Based on this observation, we develop a HITS-based attention mechanism that fully leverages sentence-level and word-level information by treating the sentences and words of the original document as authorities and hubs, respectively. Building on this mechanism, we present a novel abstractive summarization method that uses Kullback-Leibler (KL) divergence to refine the attention values, and we propose a comparison mechanism in summary generation to further improve summarization performance. When evaluated on the CNN/Daily Mail and NYT datasets, the experimental results demonstrate improved summarization performance and show that our proposed method is comparable with other summarization methods. In addition, experiments on the CORD-19 dataset (COVID-19 Open Research Dataset), a biomedical-domain dataset, show strong performance of our proposed method compared with other state-of-the-art summarization methods. (C) 2021 Elsevier B.V. All rights reserved.
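The core idea behind the HITS-based attention is the classical HITS (Hyperlink-Induced Topic Search) mutual-reinforcement iteration, applied to a bipartite graph in which sentences act as authorities and words act as hubs. The sketch below shows only the standard HITS power iteration on a sentence-word incidence matrix, as a minimal illustration of that idea; the function name `hits_scores`, the incidence weighting, and the iteration parameters are illustrative assumptions, not the authors' exact attention formulation (which is integrated into an encoder-decoder model and refined with KL divergence).

```python
import numpy as np

def hits_scores(incidence, n_iter=50, tol=1e-8):
    """Standard HITS power iteration on a bipartite sentence-word graph.

    incidence: (n_sentences, n_words) matrix; incidence[i, j] > 0 if word j
    occurs in sentence i (e.g., a count or TF-IDF weight).
    Returns (authority, hub): authority scores over sentences and hub scores
    over words.
    """
    n_sent, n_words = incidence.shape
    authority = np.ones(n_sent)   # sentence-level scores
    hub = np.ones(n_words)        # word-level scores
    for _ in range(n_iter):
        # A sentence is authoritative if it contains high-hub words.
        new_authority = incidence @ hub
        # A word is a good hub if it appears in authoritative sentences.
        new_hub = incidence.T @ new_authority
        new_authority /= np.linalg.norm(new_authority) or 1.0
        new_hub /= np.linalg.norm(new_hub) or 1.0
        if (np.abs(new_authority - authority).sum()
                + np.abs(new_hub - hub).sum()) < tol:
            authority, hub = new_authority, new_hub
            break
        authority, hub = new_authority, new_hub
    return authority, hub
```

Under this reading, the authority scores provide sentence-level salience and the hub scores word-level salience, which can then be used to weight word-level attention; how the paper integrates these scores into the decoder's attention is not specified in the abstract.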
