4.5 Review

The (black) art of runtime evaluation: Are we comparing algorithms or implementations?

Journal

KNOWLEDGE AND INFORMATION SYSTEMS
Volume 52, Issue 2, Pages 341-378

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s10115-016-1004-2

Keywords

Methodology; Efficiency evaluation; Runtime experiments; Implementation matters

Abstract

Any paper proposing a new algorithm should come with an evaluation of its efficiency and scalability (particularly when we are designing methods for big data). However, there are several (more or less serious) pitfalls in such evaluations. We would like to draw the community's attention to these pitfalls. We substantiate our points with extensive experiments, using clustering and outlier detection methods with and without index acceleration. We discuss what we can learn from such evaluations, whether experiments are properly designed, and what kinds of conclusions we should avoid. We close with some general recommendations, but maintain that designing fair and conclusive experiments will always remain a challenge for researchers and an integral part of the scientific endeavor.
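To make the abstract's central point concrete (this is an illustration, not an experiment from the paper): the Python sketch below times two implementations of the same linear-scan k-nearest-neighbor computation. The `measure` helper, the data sizes, and both k-NN functions are hypothetical choices for this example; only standard-library and NumPy calls are used.

```python
import time
import statistics
import numpy as np

def measure(fn, *args, repeats=10, warmup=2):
    """Time fn(*args) over several repetitions and report the
    distribution of wall-clock runtimes, not a single number."""
    for _ in range(warmup):                  # warm caches before timing
        fn(*args)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return {"median": statistics.median(times),
            "min": min(times), "max": max(times)}

# Two implementations of the *same* O(n*d) algorithm: a full linear
# scan of squared Euclidean distances to a single query point.
def knn_pure_python(data, query, k):
    dists = sorted(
        sum((x - y) ** 2 for x, y in zip(row, query)) for row in data
    )
    return dists[:k]

def knn_numpy(data, query, k):
    dists = np.sum((data - np.asarray(query)) ** 2, axis=1)
    return np.sort(dists)[:k]

rng = np.random.default_rng(0)
X = rng.random((5000, 10))
q = X[0].tolist()

slow = measure(knn_pure_python, X.tolist(), q, 10)
fast = measure(knn_numpy, X, q, 10)
print(f"pure Python median: {slow['median']:.4f}s, "
      f"NumPy median: {fast['median']:.4f}s")
```

The order-of-magnitude gap such a run typically shows reflects interpreter overhead and vectorization, i.e., the implementations, not the algorithms: both scans do identical work asymptotically. This is precisely why, as the abstract argues, raw runtime comparisons across differently engineered codebases (or with and without index acceleration) can mislead about algorithmic merit.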

Authors

Hans-Peter Kriegel, Erich Schubert, Arthur Zimek

Reviews

Overall rating: 4.5 (insufficient number of ratings)

Secondary ratings
Novelty: not yet rated
Importance: not yet rated
Scientific rigor: not yet rated