Article

BERTERS: Multimodal representation learning for expert recommendation system with transformers and graph embeddings

Journal

CHAOS SOLITONS & FRACTALS
Volume 151, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.chaos.2021.111260

Keywords

Multimodal representation learning; Expert recommendation system; Transformer; Graph embedding

The study introduces a recommendation system approach called BERTERS that incorporates expert candidate scores such as knowledge level, reputation, and influence into a single vector representation using BERT and graph embedding techniques. This approach significantly improves recommendation accuracy and performance across various tasks and classifiers, demonstrating its potential in diverse domains.
An expert recommendation system suggests relevant experts on a particular topic based on three different scores: authority, text similarity, and reputation. Most previous studies compute these scores individually and join them with a linear combination strategy. In contrast, in this paper we introduce a transfer-learning-based and multimodal approach, called BERTERS, that represents each expert candidate by a single vector that includes these scores in itself. BERTERS determines a representation for each candidate that reflects the candidate's level of knowledge, popularity and influence, and history. BERTERS directly uses both transformers and graph embedding techniques to convert the content published by candidates and the collaborative relationships between them into low-dimensional vectors that capture the candidates' text similarity and authority scores. To further enhance recommendation accuracy, BERTERS also takes additional features, such as the reputation score, into account. We conduct extensive experiments on multi-label classification, recommendation, and visualization tasks, and assess performance on four different classifiers, diverse train ratios, and various embedding sizes. In the classification task, BERTERS strengthens performance on the Micro-F1 and Macro-F1 metrics by 23.40% and 34.45% compared with single-modality-based methods. Furthermore, BERTERS achieves a gain of 9.12% over the baselines. The results also demonstrate the capability of BERTERS to extend to a variety of domains, such as academic and CQA, to find experts. Since the proposed expert embeddings contain rich semantic and syntactic information about the candidate, BERTERS significantly improved performance over the baselines in all tasks. (c) 2021 Elsevier Ltd. All rights reserved.
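The fusion idea described in the abstract, representing each candidate as one vector that jointly encodes text similarity (from a transformer), authority (from a graph embedding), and reputation, can be sketched as simple vector concatenation. The sketch below is illustrative only: `text_embedding` and `graph_embedding` are hypothetical stand-ins for the paper's BERT encoder and graph-embedding method, implemented here as toy functions so the example is self-contained.

```python
# Minimal sketch of a BERTERS-style multimodal fusion.
# NOTE: text_embedding and graph_embedding are hypothetical placeholders;
# the actual system uses BERT for text and a graph-embedding technique
# over the collaboration network.
from typing import List

def text_embedding(documents: List[str], dim: int = 4) -> List[float]:
    # Placeholder for a transformer text encoder (e.g. a BERT [CLS] vector).
    # Here: a toy character-based projection, just to produce a fixed-size vector.
    vec = [0.0] * dim
    for doc in documents:
        for i, ch in enumerate(doc):
            vec[i % dim] += ord(ch) / 1000.0
    return vec

def graph_embedding(candidate: str, coauthors: List[str], dim: int = 4) -> List[float]:
    # Placeholder for a graph encoder over collaborative relationships.
    vec = [0.0] * dim
    for i, name in enumerate(coauthors):
        vec[i % dim] += len(name) / 10.0
    return vec

def candidate_vector(documents: List[str], coauthors: List[str],
                     reputation: float) -> List[float]:
    # Fuse the three modalities into a single representation:
    # [ text-similarity part | authority part | reputation score ]
    return (text_embedding(documents)
            + graph_embedding("candidate", coauthors)
            + [reputation])

v = candidate_vector(["deep learning for question answering"],
                     ["alice", "bob"], reputation=0.8)
print(len(v))  # 4 (text) + 4 (graph) + 1 (reputation) = 9
```

Because the downstream classifier or recommender sees one vector per candidate, no separate linear-combination step over the three scores is needed.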
