Article

Analyzing the BBOB Results by Means of Benchmarking Concepts

Journal

EVOLUTIONARY COMPUTATION
Volume 23, Issue 1, Pages 161-185

Publisher

MIT PRESS
DOI: 10.1162/EVCO_a_00134

Keywords

Evolutionary optimization; benchmarking; exploratory landscape analysis; BBOB test set; multidimensional scaling; consensus ranking

Funding

  1. Collaborative Research Center [SFB 823]
  2. Graduate School of Energy Efficient Production and Logistics
  3. Research Training Group Statistical Modelling of the German Research Foundation

Abstract

We present methods to answer two basic questions that arise when benchmarking optimization algorithms: first, which algorithm is the best one, and second, which algorithm should I use for my real-world problem? Both questions are connected, and neither is easy to answer. We present a theoretical framework for designing such benchmark experiments and analyzing their raw data; this represents a first step toward answering both questions. We analyze the 2009 and 2010 BBOB benchmark results by means of this framework and derive insights regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus ranking, the theoretical background of such aggregation, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these groups are reflected by previously proposed test problem characteristics, finding that this is not always the case.
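To illustrate the kind of rank aggregation the abstract refers to, the sketch below uses the Borda count, one common and simple consensus method. It is an illustration only, not necessarily the aggregation procedure studied in the paper, and the algorithm names are placeholders.

```python
# Illustrative sketch: aggregating per-problem algorithm rankings into a
# consensus ranking via Borda count. Borda count is one simple consensus
# method; the paper may use a different or more refined procedure.

def borda_consensus(rankings):
    """rankings: list of lists, each an ordering of algorithm names from
    best to worst on one test problem. Returns the algorithms sorted by
    total Borda score (higher score = better overall)."""
    scores = {}
    n = len(rankings[0])  # number of algorithms per ranking
    for ranking in rankings:
        for position, algo in enumerate(ranking):
            # Best position earns n-1 points, worst earns 0.
            scores[algo] = scores.get(algo, 0) + (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings of three algorithms on three test problems:
per_problem = [
    ["CMA-ES", "BFGS", "RandomSearch"],
    ["CMA-ES", "RandomSearch", "BFGS"],
    ["BFGS", "CMA-ES", "RandomSearch"],
]
print(borda_consensus(per_problem))
# → ['CMA-ES', 'BFGS', 'RandomSearch']
```

A known pitfall of simple positional methods like this is that the consensus can be sensitive to adding or removing a single problem or algorithm, which is one reason the theoretical background of rank aggregation matters.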
