Article

Impact-Based Ranking of Scientific Publications: A Survey and Experimental Evaluation

Journal

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Volume 33, Issue 4, Pages 1567-1584

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2019.2941206

Keywords

Measurement; Benchmark testing; Market research; Indexes; Information retrieval; Data mining; Bibliometrics

Funding

  1. Project "Moving from Big Data Management to Data Science", Operational Programme Competitiveness, Entrepreneurship and Innovation (NSRF 2014-2020) [MIS 5002437/3]
  2. European Union (European Regional Development Fund)

Abstract

With the rate of scientific publishing increasing, there is a growing need to discern high-impact publications. Various methods rank publications by their expected citation-based impact, yet the research area has not been studied systematically and evaluation methodologies vary widely.
As the rate at which scientific work is published continues to increase, so does the need to discern high-impact publications. In recent years, several approaches have been proposed that rank publications based on their expected citation-based impact. Despite this level of attention, the research area has not been systematically studied. Past literature often fails to distinguish between short-term impact (the current popularity of an article) and long-term impact (its overall influence). Moreover, the evaluation methodologies applied vary widely and are inconsistent. In this work, we aim to fill these gaps by studying impact-based ranking theoretically and experimentally. First, we provide explicit definitions for short-term and long-term impact and introduce the associated ranking problems. Then, we identify and classify the most important ideas employed by state-of-the-art methods. After studying the evaluation methodologies used in the literature, we propose a benchmark framework that better differentiates effectiveness across these impact aspects. Using this framework, we investigate: (1) the practical difference between ranking by short- and long-term impact, and (2) the effectiveness and efficiency of ranking methods in different settings. To avoid reporting discipline-dependent results, we perform our experiments on four datasets from different scientific disciplines.
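
As a purely illustrative aside (not taken from the paper and not its methodology), the following minimal Python sketch contrasts two toy citation-based scores that mirror the short-term/long-term distinction above: citations received within a recent time window versus total accumulated citations. All paper identifiers, data values, and function names are hypothetical.

# Illustrative toy example only -- this is NOT the methodology of the paper.
# It contrasts two simple citation-based scores that mirror the short-term
# vs. long-term impact distinction: citations received recently versus
# citations accumulated overall. All data below is hypothetical.

from collections import Counter

# Hypothetical citation records: (citing_paper, cited_paper, citation_year)
citations = [
    ("p4", "p1", 2014), ("p5", "p1", 2015), ("p6", "p1", 2016), ("p7", "p1", 2016),
    ("p6", "p2", 2019), ("p7", "p2", 2020), ("p8", "p2", 2020),
    ("p8", "p3", 2020),
]

def long_term_scores(records):
    # Proxy for long-term impact (overall influence): total citations accumulated.
    return Counter(cited for _, cited, _ in records)

def short_term_scores(records, current_year=2020, window=2):
    # Proxy for short-term impact (current popularity): citations received
    # within the last `window` years only.
    return Counter(
        cited for _, cited, year in records if current_year - year < window
    )

def rank(scores):
    # Descending by score; ties broken alphabetically by paper id.
    return sorted(scores, key=lambda paper: (-scores[paper], paper))

print("Long-term ranking: ", rank(long_term_scores(citations)))   # ['p1', 'p2', 'p3']
print("Short-term ranking:", rank(short_term_scores(citations)))  # ['p2', 'p3']
# The two rankings disagree: p1 dominates by total citations, while p2 is the
# most cited in the recent window.

The methods surveyed in the paper are considerably more sophisticated; this toy contrast only illustrates why the two ranking problems benefit from being defined and evaluated separately, as the proposed benchmark framework does.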
