Article

Multi-perspective respondent representations for answer ranking in community question answering

Journal

INFORMATION SCIENCES
Volume 624, Pages 37-48

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.12.040

Keywords

Community question answering; Answer ranking; Network embedding; Pre-trained language model


Answer ranking is crucial in community question answering (CQA) systems. Existing methods mainly learn respondents' expertise from their history answers and ignore the structural correlations between question raisers and respondents. To address this, the proposed multi-perspective respondent representation (MPRR) network employs a heterogeneous information network (HIN) to preserve these structural correlations and uses a frozen pre-trained language model to learn respondents' expertise more efficiently. The model outperforms all baseline models on three real-world datasets in terms of three ranking metrics.
Answer ranking is an important task in community question answering (CQA) systems. It aims at ranking useful answers above useless answers. Existing works learn respondents' expertise to help estimate the quality of answers. However, in most of these works, the expertise is learned only from the history answers. As a result, structural correlations between question raisers and respondents are usually ignored. Besides, these works lack an efficient way to learn respondent expertise from extensive history answers. To address these limitations, we propose a novel multi-perspective respondent representation learning (MPRR) network. First, our model learns embeddings of raisers and respondents through a heterogeneous information network (HIN) constructed from the answering records in CQA websites. The structural correlations between raisers and respondents are preserved in the learned embeddings. Second, a frozen pre-trained language model is used to learn respondents' expertise from history answer contents more quickly. Then the multi-perspective respondent representations are generated based on their expertise and the embeddings learned in the HIN. At last, the raisers, respondents, questions, and answers are all considered to compute the matching scores. We evaluate our model on three real-world CQA datasets. Experimental results show that MPRR outperforms all baseline models with three ranking metrics on all datasets. (c) 2022 Elsevier Inc. All rights reserved.
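The page does not include the paper's implementation, so the following is a minimal, hypothetical PyTorch sketch of the general idea the abstract describes: fuse frozen language-model encodings of question and answer text with graph-derived raiser and respondent embeddings to score candidate answers. All names, dimensions, and the fusion layer are illustrative assumptions, not the authors' actual MPRR architecture.

```python
import torch
import torch.nn as nn


class AnswerRankingScorer(nn.Module):
    """Illustrative answer scorer (hypothetical; not the authors' MPRR model).

    Combines fixed text encodings of question/answer with raiser/respondent
    node embeddings, e.g. pre-trained on a HIN built from answering records.
    """

    def __init__(self, num_users, graph_dim=128, text_dim=768, hidden=256):
        super().__init__()
        # One embedding per user (raiser or respondent), learned or loaded
        # from a network-embedding method over the question-answer graph.
        self.user_emb = nn.Embedding(num_users, graph_dim)
        # Simple fusion of question text, answer text, raiser, respondent.
        self.scorer = nn.Sequential(
            nn.Linear(2 * text_dim + 2 * graph_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q_vec, a_vec, raiser_id, respondent_id):
        # q_vec, a_vec: sentence vectors from a frozen pre-trained LM
        # (e.g. mean-pooled last hidden states), shape (batch, text_dim).
        r = self.user_emb(raiser_id)       # raiser embedding from the graph
        p = self.user_emb(respondent_id)   # respondent embedding from the graph
        fused = torch.cat([q_vec, a_vec, r, p], dim=-1)
        return self.scorer(fused).squeeze(-1)  # higher score = better answer


if __name__ == "__main__":
    # Usage sketch: score four candidate answers to one question and rank them.
    model = AnswerRankingScorer(num_users=1000)
    q = torch.randn(4, 768)                    # question vector, repeated
    a = torch.randn(4, 768)                    # four candidate answer vectors
    raisers = torch.tensor([7, 7, 7, 7])       # same raiser for all candidates
    respondents = torch.tensor([3, 42, 99, 5]) # different respondents
    scores = model(q, a, raisers, respondents)
    print(scores.argsort(descending=True))     # candidate indices, best first
```

In the paper's setting, the text encoder is kept frozen so that expertise can be derived from large volumes of history answers cheaply; the sketch above reflects that only by treating the text vectors as precomputed inputs.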
