Article

Generating Relevant and Informative Questions for Open-Domain Conversations

Journal

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3510612

Keywords

Conversational search; neural question generation; open-domain conversations; context modeling


Recent research has highlighted the importance of mixed-initiative interactions in conversational search. To enable mixed-initiative interactions, information retrieval systems should be able to ask diverse questions, such as information-seeking, clarification, and open-ended ones. Question generation (QG) in open-domain conversational systems aims at enhancing the interactiveness and persistence of human-machine interactions. The task is challenging because of the sparsity of QG-specific data in conversations, and current work is limited to single-turn interaction scenarios. We propose a context-enhanced neural question generation (CNQG) model that leverages the conversational context to predict question content and pattern, and then performs question decoding. A hierarchical encoder framework is employed to obtain the discourse-level context representation. Based on this, we propose Review and Transit mechanisms to respectively select contextual keywords and predict new topic words, which together constitute the question content. The conversational context and the predicted question content are used to produce the question pattern, which in turn guides the question decoding process, implemented by a recurrent decoder with a joint attention mechanism. To fully utilize the limited QG-specific data to train our question generator, we perform multi-task learning with three auxiliary training objectives, i.e., question pattern prediction and the Review and Transit mechanisms. The required additional labeled data are obtained in a self-supervised way. We also design a weight-decaying strategy to adjust the influence of the various auxiliary learning tasks. To the best of our knowledge, we are the first to extend the application of QG to the multi-turn open-domain conversational scenario. Extensive experimental results demonstrate the effectiveness of our proposal and its main components in generating relevant and informative questions, with robust performance for contexts of various lengths.
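The Review and Transit mechanisms described above can be loosely pictured as two scoring steps over a shared discourse-level context vector: Review attends over keywords already present in the conversation, while Transit projects the context onto the full vocabulary to propose new topic words. The following is a minimal NumPy sketch of that idea; all names, dimensions, and the dot-product scoring are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def review(context_vec, keyword_embs, top_k=2):
    """Review (sketch): score contextual keywords against the
    discourse-level context vector and keep the top-k as candidate
    question content."""
    scores = softmax(keyword_embs @ context_vec)
    return np.argsort(-scores)[:top_k], scores

def transit(context_vec, vocab_embs, top_k=2):
    """Transit (sketch): score the whole vocabulary against the
    context vector to propose new topic words not yet mentioned."""
    scores = softmax(vocab_embs @ context_vec)
    return np.argsort(-scores)[:top_k]

# Toy inputs: embedding size, context keywords, and vocabulary
# are all hypothetical.
rng = np.random.default_rng(0)
d = 8
context_vec = rng.normal(size=d)
keyword_embs = rng.normal(size=(5, d))   # 5 candidate context keywords
vocab_embs = rng.normal(size=(20, d))    # toy vocabulary of 20 words

kept, kept_scores = review(context_vec, keyword_embs)
new_topics = transit(context_vec, vocab_embs)
print(kept, new_topics)
```

In the paper these two selection steps are trained as auxiliary objectives alongside question pattern prediction; the sketch only illustrates the inference-time selection, not the multi-task training.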

