Article

A critical problem in benchmarking and analysis of evolutionary computation methods

Journal

NATURE MACHINE INTELLIGENCE
Volume 4, Issue 12, Pages 1238-1245

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s42256-022-00579-0

Funding

  1. Grant Agency of the Czech Republic [22-31173S]
  2. Brno University of Technology [FSI-S-20-6538]

Summary

The article highlights a critical issue in evolutionary computation: some frequently used benchmark functions have their optima at the centre of the feasible set, which distorts the analysis of algorithms. An analysis of seven recently published methods found that a centre-bias operator lets them locate optima at the centre of the benchmark set with ease, rendering comparisons with methods that lack this bias meaningless. Comparing the computational performance of these methods with established ones such as 'differential evolution' and 'particle swarm optimization' gave varied results, with only one of the new methods consistently outperforming the older ones.

Abstract

Benchmarking is a cornerstone in the analysis and development of computational methods, especially in the field of evolutionary computation, where theoretical analysis of the algorithms is almost impossible. In this Article, we show that some of the frequently used benchmark functions have their respective optima in the centre of the feasible set and that this poses a critical problem for the analysis of evolutionary computation methods. We carry out an analysis of seven recently published methods and find that these contain a centre-bias operator that lets them find optima in the centre of the benchmark set with ease. However, this mechanism makes their comparison with other methods (that do not have a centre-bias) meaningless. We compare the computational performance of these seven new methods to two long-standing ones in evolutionary computation ('differential evolution' and 'particle swarm optimization') on shifted problems and on more advanced benchmark problems. Only one of the seven methods performed consistently better than the pair of old methods, three performed on par, two performed very badly and the worst one performed barely better than a random search. We provide several suggestions that could help to improve analysis and benchmarking in evolutionary computation.
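
To make the centre-bias problem concrete, here is a minimal Python sketch. It is our own illustration, not code from the paper: the function names, the sphere benchmark, and the bounds are assumptions chosen for clarity. It shows why shifting a benchmark exposes centre-biased search: a classic sphere function has its optimum exactly at the centre of the feasible set, so a move that implicitly samples near the centre scores perfectly on it, while the same centre guess fails badly once the optimum is shifted.

```python
import numpy as np

# Hypothetical illustration (not the paper's code): a sphere benchmark whose
# optimum sits at the centre of the feasible set [-100, 100]^d, plus a
# shifted variant that moves the optimum to a random point.

rng = np.random.default_rng(0)
DIM, LOW, HIGH = 10, -100.0, 100.0

def sphere(x):
    """Classic benchmark: global optimum f(0) = 0 at the centre of the domain."""
    return float(np.sum(x ** 2))

SHIFT = rng.uniform(LOW, HIGH, DIM)  # random offset for the optimum

def shifted_sphere(x):
    """Shifted variant: optimum moved to SHIFT, away from the centre."""
    return float(np.sum((x - SHIFT) ** 2))

# A 'centre-bias' move: simply propose the midpoint of the bounds.
centre = np.full(DIM, (LOW + HIGH) / 2.0)

# On the unshifted function the centre guess is already the optimum ...
print("sphere at centre:        ", sphere(centre))          # 0.0
# ... but on the shifted function it is far from optimal, which is how
# shifting reveals algorithms that implicitly search near the centre.
print("shifted sphere at centre:", shifted_sphere(centre))  # large value
print("shifted sphere at SHIFT: ", shifted_sphere(SHIFT))   # 0.0
```

This is, in spirit, the shifted-problem test the abstract describes: if a method's performance collapses once the optimum is moved away from the centre, its earlier benchmark results reflected centre bias rather than genuine search quality.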
