Article

Probabilistic Keyphrase Generation From Copy and Generating Spaces

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2023.3290789

Keywords

Gaussian mixture model (GMM); keyphrase generation; variational encoder-decoder (VED); von Mises-Fisher (vMF) distribution

Summary

Keyphrase generation is a fundamental task in NLP, and existing methods mainly optimize a holistic distribution to generate keyphrases while neglecting direct manipulation of the copy and generating spaces, which limits the decoder's generalizability. This article proposes a probabilistic keyphrase generation model built on a variational encoder-decoder framework, with separate latent variables for the copy and generating spaces. Experiments on social media and scientific article datasets show that the proposed model produces more accurate predictions and a controllable number of keyphrases.

Abstract

Keyphrase generation is one of the most fundamental tasks in natural language processing (NLP). Most existing works on keyphrase generation focus on optimizing the negative log-likelihood loss over a holistic distribution, but they do not directly manipulate the copy and generating spaces, which may reduce the generalizability of the decoder. Additionally, existing keyphrase models either cannot determine a dynamic number of keyphrases or determine that number only implicitly. In this article, we propose a probabilistic keyphrase generation model from copy and generating spaces. The proposed model is built upon the vanilla variational encoder-decoder (VED) framework. On top of VED, two separate latent variables model the distribution of data within the latent copy and generating spaces, respectively. Specifically, we adopt a von Mises-Fisher (vMF) distribution to obtain a condensed variable that modifies the generating probability distribution over the predefined vocabulary. Meanwhile, we utilize a clustering module, designed to promote Gaussian mixture learning, to extract a latent variable for the copy probability distribution. Moreover, we exploit a natural property of the Gaussian mixture network and use the number of filtered components to determine the number of keyphrases. The approach is trained with latent-variable probabilistic modeling, neural variational inference, and self-supervised learning. Experiments on social media and scientific article datasets show that the proposed model outperforms state-of-the-art baselines in generating accurate predictions and a controllable number of keyphrases.
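The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch (not the authors' code) of a single decoder step in that spirit: a generating distribution over the vocabulary, modulated by a unit-norm latent variable standing in for the vMF sample, is mixed with a copy distribution over source tokens, modulated by a Gaussian-mixture-style latent. All names (CopyGenDecoderStep, z_gen, z_copy, n_components) are illustrative assumptions, the vMF draw is approximated by L2-normalizing a Gaussian reparameterization, and the component-filtering rule that fixes the number of keyphrases is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyGenDecoderStep(nn.Module):
    def __init__(self, hidden_dim, vocab_size, latent_dim, n_components):
        super().__init__()
        # Latent for the generating space: a reparameterized Gaussian sample,
        # L2-normalized onto the unit sphere as a stand-in for a vMF draw.
        self.gen_mu = nn.Linear(hidden_dim, latent_dim)
        self.gen_logvar = nn.Linear(hidden_dim, latent_dim)
        # Latent for the copy space: soft assignment over mixture components,
        # each component owning a learned mean vector.
        self.comp_logits = nn.Linear(hidden_dim, n_components)
        self.comp_means = nn.Parameter(torch.randn(n_components, latent_dim))
        # Output heads.
        self.gen_out = nn.Linear(hidden_dim + latent_dim, vocab_size)
        self.copy_query = nn.Linear(hidden_dim + latent_dim, hidden_dim)
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, dec_state, enc_states, src_token_ids):
        # dec_state: (B, H); enc_states: (B, S, H); src_token_ids: (B, S) int64.
        mu, logvar = self.gen_mu(dec_state), self.gen_logvar(dec_state)
        z_gen = F.normalize(mu + torch.randn_like(mu) * (0.5 * logvar).exp(), dim=-1)
        # Copy-space latent as a responsibility-weighted mix of component means.
        resp = F.softmax(self.comp_logits(dec_state), dim=-1)        # (B, K)
        z_copy = resp @ self.comp_means                              # (B, D)
        # Generating distribution over the predefined vocabulary.
        p_gen = F.softmax(self.gen_out(torch.cat([dec_state, z_gen], dim=-1)), dim=-1)
        # Copy distribution: attention over source positions, scattered back
        # onto vocabulary ids so both distributions share one support.
        q = self.copy_query(torch.cat([dec_state, z_copy], dim=-1))  # (B, H)
        attn = F.softmax(torch.bmm(enc_states, q.unsqueeze(-1)).squeeze(-1), dim=-1)
        p_copy = torch.zeros_like(p_gen).scatter_add(1, src_token_ids, attn)
        # A sigmoid gate mixes the copy and generating spaces.
        g = torch.sigmoid(self.gate(dec_state))
        return (1.0 - g) * p_gen + g * p_copy

For example, step = CopyGenDecoderStep(32, 100, 16, 8) followed by step(torch.randn(4, 32), torch.randn(4, 10, 32), torch.randint(0, 100, (4, 10))) returns a (4, 100) tensor of valid next-token probabilities. In the paper's scheme, thresholding the mixture responsibilities (resp above) and counting the surviving components would yield the predicted number of keyphrases; that filtering step is not shown here.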
