Article

Self-Adapting Particle Swarm Optimization for continuous black box optimization

Journal

APPLIED SOFT COMPUTING
Volume 131

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2022.109722

Keywords

Meta-heuristics; Global optimization; Hyper-heuristics

Abstract

This paper introduces a new version of a hyper-heuristic framework: Generalized Self-Adapting Particle Swarm Optimization with samples archive (M-GAPSO). The framework builds on the authors' previous work on hybridizing optimization algorithms and on enhancing population-based optimization with model-based optimization. The paper presents the structure of the proposed framework and analyzes the impact of its modules on the final system's performance. M-GAPSO hybridizes Particle Swarm Optimization, Differential Evolution, and model-based optimizers; the ratio of the individual algorithms within the population is regulated by an adaptation scheme. The applicability of the proposed hybrid method to black-box optimization is verified on 24 continuous benchmark functions from the COCO (BBOB) test set and 29 functions from the CEC-2017 test set. On the BBOB test set, a hybrid of PSO and DE with adaptation obtained 11 significantly better and 2 significantly worse results than the basic DE on the 5- and 20-dimensional functions. Further inclusion of the model-based optimizers led to 15 significantly better and 2 significantly worse results compared with the PSO-DE hybrid. On the CEC-2017 test set, M-GAPSO was significantly better than both Red Fox Optimization and Dual Opposition-Based Learning for Differential Evolution (DOBL) on 7 functions in 30 dimensions and 12 functions in 50 dimensions. (c) 2022 Elsevier B.V. All rights reserved.
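The abstract describes the adaptation scheme only at a high level. As a rough illustration, the sketch below implements one plausible reading of it: each agent in a shared population follows either a PSO or a DE update, improvements credit the behavior that produced them, and behaviors are periodically resampled in proportion to those credits. This is a minimal Python sketch under assumed parameter values (inertia 0.72, acceleration coefficients 1.49, DE's F = 0.8 and CR = 0.9, credit decay 0.9), not the authors' M-GAPSO implementation; the model-based optimizers and the samples archive from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective; stands in for a BBOB/CEC benchmark function."""
    return float(np.sum(x ** 2))

DIM, POP, ITERS = 5, 20, 200
BEHAVIORS = ["pso", "de"]  # model-based optimizers omitted in this sketch

X = rng.uniform(-5.0, 5.0, (POP, DIM))      # positions
V = np.zeros((POP, DIM))                    # PSO velocities
F = np.array([sphere(x) for x in X])        # current fitness values
pbest, pbest_f = X.copy(), F.copy()         # personal bests
behavior = rng.choice(BEHAVIORS, size=POP)  # behavior assigned to each agent
score = {b: 1.0 for b in BEHAVIORS}         # smoothed success score per behavior

for _ in range(ITERS):
    g = pbest[np.argmin(pbest_f)]           # swarm-wide best position
    for i in range(POP):
        if behavior[i] == "pso":
            # standard constricted PSO velocity/position update
            V[i] = (0.72 * V[i]
                    + 1.49 * rng.random(DIM) * (pbest[i] - X[i])
                    + 1.49 * rng.random(DIM) * (g - X[i]))
            cand = X[i] + V[i]
        else:
            # DE/rand/1/bin mutation and binomial crossover
            a, b, c = X[rng.choice(POP, size=3, replace=False)]
            cross = rng.random(DIM) < 0.9
            cand = np.where(cross, a + 0.8 * (b - c), X[i])
        f_cand = sphere(cand)
        if f_cand < F[i]:                   # an improvement credits the behavior
            score[behavior[i]] += 1.0
        if behavior[i] == "pso" or f_cand < F[i]:
            X[i], F[i] = cand, f_cand       # PSO always moves, DE is greedy
        if f_cand < pbest_f[i]:
            pbest[i], pbest_f[i] = cand.copy(), f_cand

    # adaptation step: reassign behaviors in proportion to recent success
    total = sum(score.values())
    behavior = rng.choice(BEHAVIORS, size=POP,
                          p=[score[b] / total for b in BEHAVIORS])
    for b in BEHAVIORS:
        score[b] *= 0.9                     # decay old evidence

print("best objective value:", pbest_f.min())
```

In M-GAPSO itself the adaptation operates over a richer set of behaviors, including the model-based optimizers, and is coupled with a samples archive; the sketch only conveys the success-driven reallocation idea.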

