Article

A neuro-diversified benchmark generator for black box optimization

Journal

INFORMATION SCIENCES
Volume 573, Pages 475-492

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2021.04.075

Keywords

Diversified landscape generator; Benchmark problems; Black box optimization; Evolutionary algorithms; Recurrent neural network; Activation function

Funding

  1. National Natural Science Foundation of China [61872419, 61573166, 61572230]
  2. Shandong Provincial Key R&D Program [2019GGX101041, 2018CXGC0706]
  3. Taishan Scholars Program of Shandong Province, China [tsqn201812077]


The study introduces a novel framework that comprehensively assesses evolutionary algorithms in a black-box scenario by randomly generating diversified benchmark functions. Experiments show that the tested optimizers perform markedly differently on the proposed problems than on the well-known BBOB and CEC problems, underscoring the need for the proposed benchmarks in a more comprehensive evaluation.
The No Free Lunch theorem poses a dilemma for evaluating emerging evolutionary algorithms on varied real-world problems with unknown internal structure, since the performance of these algorithms depends on the chosen benchmarks. Although white-box and black-box schemes, such as explicit property definition and basis-function composition, have made impressive progress toward resolving this dilemma, the evaluation of algorithms on sophisticated suites remains insufficient because such benchmarks are limited in quantity and diversity, which can bias results toward a narrow problem domain. This study therefore proposes a novel framework that randomly generates diversified benchmark functions to comprehensively evaluate evolutionary algorithms in a black-box scenario. The proposed approach adopts a recurrent neural network with various activation functions to produce test problems with important characteristics such as ruggedness and multiple funnels. In addition, by drawing random weights, the framework can generate virtually limitless chaotic benchmarks. The experimental results demonstrate a distinct difference in performance of the tested optimizers between the proposed problems and the well-known BBOB and CEC problems, which implies the necessity of the proposed benchmarks for a more comprehensive evaluation. (c) 2021 Elsevier Inc. All rights reserved.
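The abstract's core idea, feeding a point through a small recurrent network with random weights and mixed activation functions to obtain a scalar fitness value, can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the function name `make_benchmark`, the layer sizes, the number of recurrent steps, and the particular activation cycle (tanh, sine, ReLU) are all assumptions made for illustration.

```python
import math
import random

def make_benchmark(dim, hidden=8, steps=3, seed=0):
    """Build a scalar test function f: R^dim -> R from a tiny recurrent
    net with random Gaussian weights and mixed activations.
    Hypothetical sketch of the paper's idea, not the authors' code."""
    rng = random.Random(seed)
    w_in = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(hidden)]
    w_rec = [[rng.gauss(0, 1) / math.sqrt(hidden) for _ in range(hidden)]
             for _ in range(hidden)]
    w_out = [rng.gauss(0, 1) for _ in range(hidden)]
    # Cycling through distinct activations diversifies the landscape:
    # tanh is smooth, sin injects ruggedness, ReLU adds piecewise ridges.
    acts = [math.tanh, math.sin, lambda z: max(z, 0.0)]

    def f(x):
        h = [0.0] * hidden  # recurrent state, initially zero
        for t in range(steps):
            act = acts[t % len(acts)]
            h = [act(sum(wi * xi for wi, xi in zip(w_in[i], x))
                     + sum(wr * hj for wr, hj in zip(w_rec[i], h)))
                 for i in range(hidden)]
        return sum(wo * hi for wo, hi in zip(w_out, h))

    return f

# Each seed yields a different random landscape; a fixed seed is reproducible.
f1 = make_benchmark(dim=2, seed=1)
f2 = make_benchmark(dim=2, seed=2)
```

Because the weights are drawn at random per seed, this construction can emit an effectively unlimited stream of distinct black-box test functions, which matches the "virtually limitless" generation claim in the abstract.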

