Article

A Classifier-Assisted Level-Based Learning Swarm Optimizer for Expensive Optimization

Journal

IEEE Transactions on Evolutionary Computation
Volume 25, Issue 2, Pages 219-233

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TEVC.2020.3017865

Keywords

Expensive optimization; gradient boosting classifier (GBC); large-scale optimization; level-based learning swarm optimizer (LLSO); surrogate-assisted evolutionary algorithm (SAEA)

Funding

  1. Ministry of Science and Technology of China [2018AAA0101300]
  2. National Natural Science Foundation of China [61976093, 61876111]
  3. Natural Science Foundation of Jiangsu [SBK2020040136]
  4. Brain Pool program - Ministry of Science and ICT through the National Research Foundation of Korea [NRF-2019H1D3A2A01101977]
  5. Key Project of Science and Technology Innovation 2030
  6. National Research Foundation of Korea [2019H1D3A2A01101977] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)


This study proposes a classifier-assisted level-based learning swarm optimizer that combines the level-based learning strategy with a gradient boosting classifier to enhance the robustness and scalability of SAEAs. Experimental results demonstrate that the proposed optimizer outperforms three state-of-the-art SAEAs while using only a small training dataset.
Surrogate-assisted evolutionary algorithms (SAEAs) have become a popular approach to solving complex and computationally expensive optimization problems. However, most existing SAEAs suffer performance degradation as problem dimensionality increases. To address this issue, this article proposes a classifier-assisted level-based learning swarm optimizer, built on the level-based learning swarm optimizer (LLSO) and the gradient boosting classifier (GBC), to improve the robustness and scalability of SAEAs. In particular, the level-based learning strategy in LLSO corresponds naturally to the classification task: the number of levels in LLSO is set equal to the number of classes in GBC. In this way, the classification results feed back the distribution of promising candidates to accelerate the evolution of the optimizer, while the evolved population helps improve the accuracy of the classifier. To select informative and valuable candidates for real fitness evaluations, we devise an L1-exploitation strategy that extensively exploits promising areas. Candidate selection is then conducted between the offspring predicted to be in level L1 and the already real-evaluated L1 individuals, based on their Euclidean distances. Extensive experiments on commonly used benchmark functions demonstrate that the proposed optimizer achieves competitive or better performance with a very small training dataset compared with three state-of-the-art SAEAs.
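Two ingredients of the abstract can be sketched in code: fitness-sorted levels that double as class labels for the classifier, and Euclidean-distance-based selection among offspring predicted to be in the best level. The following minimal Python sketch is illustrative only; the function names, the equal-sized level split, and the max-min-distance selection rule are assumptions for demonstration, not the paper's exact procedure (which uses a trained GBC to predict levels).

```python
import math

def assign_levels(population, fitnesses, num_levels):
    """Sort individuals by fitness (minimization) and split them into
    equal-sized levels; level 0 holds the best individuals.  These level
    indices can serve as class labels when training a classifier such
    as a gradient boosting classifier (assumed equal-sized split)."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    per_level = math.ceil(len(population) / num_levels)
    labels = [0] * len(population)
    for rank, idx in enumerate(order):
        labels[idx] = min(rank // per_level, num_levels - 1)
    return labels

def select_candidate(predicted_l1, evaluated_l1):
    """Hypothetical selection rule: among offspring predicted to belong
    to the best level, pick the one with the largest minimum Euclidean
    distance to the already real-evaluated best-level individuals,
    favoring unexplored promising regions."""
    def min_dist(x):
        return min(math.dist(x, y) for y in evaluated_l1)
    return max(predicted_l1, key=min_dist)
```

For example, `assign_levels([[0], [1], [2], [3]], [3.0, 1.0, 4.0, 2.0], 2)` places the two fittest individuals (indices 1 and 3) in level 0 and the rest in level 1; in the full algorithm these labels would be the targets for the GBC, and only the selected candidate would be sent for a real (expensive) evaluation.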

