Article

DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3231891

Keywords

Measurement; Benchmark testing; Recommender systems; Reproducibility of results; Libraries; Biological system modeling; Predictive models; Benchmarks; fair comparison; recommender systems; reproducible evaluation; standardized procedures

Abstract

Recently, a critical issue has loomed large in the field of recommender systems: the lack of effective benchmarks for rigorous evaluation, which consequently leads to unreproducible evaluation and unfair comparison. We therefore conduct studies from the perspectives of practical theory and experiments, aiming at benchmarking recommendation for rigorous evaluation. Regarding the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review of 141 papers published at eight top-tier conferences between 2017 and 2020. We then classify them into model-independent and model-dependent hyper-factors, and accordingly define and discuss different modes of rigorous evaluation in depth. For the experimental study, we release the DaisyRec 2.0 library, which integrates these hyper-factors to perform rigorous evaluation, whereby a holistic empirical study is conducted to unveil the impacts of different hyper-factors on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and providing the performance of ten state-of-the-art models across six evaluation metrics on six datasets as a reference for later studies. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays the foundation for further investigation.
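To make the abstract's notion of standardized, reproducible procedures more concrete, the sketch below illustrates a fixed-seed, leave-one-out, full-ranking evaluation loop. It is not the DaisyRec 2.0 API; the function names (leave_one_out_split, evaluate) and the popularity baseline are hypothetical stand-ins, and only the general ideas (fixed random seed, explicit data split, full ranking with NDCG@10) follow the paper's theme.

```python
# Minimal, hypothetical sketch of a standardized evaluation procedure.
# NOT the DaisyRec 2.0 API; all names here are illustrative.
import numpy as np

def leave_one_out_split(interactions, rng):
    """Hold out one interaction per user for testing (a common model-independent hyper-factor)."""
    train, test, by_user = [], {}, {}
    for u, i in interactions:
        by_user.setdefault(u, []).append(i)
    for u, items in by_user.items():
        held_out = items[rng.integers(len(items))]
        test[u] = held_out
        train.extend((u, i) for i in items if i != held_out)
    return train, test

def ndcg_at_k(ranked_items, target, k=10):
    """Binary-relevance NDCG@k for a single held-out item."""
    for rank, item in enumerate(ranked_items[:k]):
        if item == target:
            return 1.0 / np.log2(rank + 2)
    return 0.0

def evaluate(score_fn, test, n_items, k=10):
    """Rank all items per user (full ranking, no sampled negatives) and average NDCG@k."""
    scores = []
    for u, target in test.items():
        item_scores = score_fn(u, np.arange(n_items))
        ranked = list(np.argsort(-item_scores))
        scores.append(ndcg_at_k(ranked, target, k))
    return float(np.mean(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(2022)  # fixed seed for reproducibility
    n_users, n_items = 50, 200
    interactions = [(u, int(rng.integers(n_items))) for u in range(n_users) for _ in range(5)]
    train, test = leave_one_out_split(interactions, rng)
    # Trivial popularity baseline standing in for a trained recommender.
    popularity = np.bincount([i for _, i in train], minlength=n_items).astype(float)
    print("NDCG@10:", evaluate(lambda u, items: popularity[items], test, n_items))
```

Whether candidates are ranked over all items or over sampled negatives is exactly the kind of hyper-factor the paper argues must be reported explicitly, since it can change reported metric values substantially.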
