Article

An empirical study of the design choices for local citation recommendation systems

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 200

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.116852

Keywords

Natural language processing; Citation recommendation; Information retrieval; BM25; SPECTER; Negative sampling

Funding

  1. Croatian Science Foundation [HRZZ-DOK-2018-09]

The study demonstrates the impact of three important design choices in building local citation recommendation systems: parameters of prefiltering models, training regime, and negative sampling strategy. Optimizing these choices can significantly improve the system's performance.
As the number of published research articles grows daily, it is becoming increasingly difficult for scientists to keep up with the published work. Local citation recommendation (LCR) systems, which produce a list of relevant articles to be cited in a given text passage, could help alleviate this burden and facilitate research. While research on LCR is gaining popularity, building such systems involves a number of important design choices that are often overlooked. We present an empirical study of the impact of three design choices in two-stage LCR systems consisting of a prefiltering and a reranking phase. In particular, we investigate (1) the impact of the prefiltering models' parameters on the model's performance, as well as the impact of (2) the training regime and (3) the negative sampling strategy on the performance of the reranking model. We evaluate various combinations of these parameters on two datasets commonly used for LCR and demonstrate that specific combinations improve the model's performance over the widely used standard approaches. Specifically, we demonstrate that (1) optimizing the prefiltering models' parameters improves R@1000 by 3% to 12% in absolute terms, (2) using the strict training regime improves both R@10 and MRR (by up to 3.4% and 2.6%, respectively) in all combinations of dataset and prefiltering model, and (3) a careful choice of negative examples can further improve both R@10 and MRR (by up to 11.9% and 8%, respectively) on both datasets. Our results show that the design choices we considered are important and should be given greater consideration when building LCR systems.
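The abstract describes a two-stage pipeline: a cheap prefiltering model (such as BM25) first narrows the full candidate pool to the top-k papers, and a trained reranker then reorders the survivors. The sketch below illustrates only the prefiltering stage with a from-scratch BM25 scorer; it is a minimal illustration, not the authors' implementation, and the toy corpus, query, and cutoff `k` are invented for the example.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each document in `corpus` against `query` with Okapi BM25.
    `query` and each document are lists of tokens."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # Document frequency: in how many documents does each term occur?
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

# Toy candidate pool; in an LCR system these would be candidate papers
# (e.g., titles plus abstracts) and the query would be the citing passage.
corpus = [
    "citation recommendation with neural reranking".split(),
    "bm25 ranking function for information retrieval".split(),
    "negative sampling for representation learning".split(),
]
query = "bm25 information retrieval".split()

# Stage 1: prefilter - keep the top-k candidates by BM25 score.
k = 2
scores = bm25_scores(query, corpus)
ranked = sorted(range(len(corpus)), key=scores.__getitem__, reverse=True)
prefiltered = ranked[:k]
print(prefiltered)  # document index 1 matches all query terms and ranks first
```

A reranker (stage 2, not shown) would then reorder `prefiltered`. This sketch also hints at design choice (3): hard negatives for reranker training can be drawn from prefiltered candidates that are not actually cited, rather than sampled at random.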

Authors


