Article

Heterogeneous comprehensive learning and dynamic multi- swarm particle swarm optimizer with two mutation operators

Journal

INFORMATION SCIENCES
Volume 540, Pages 175-201

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2020.06.027

Keywords

Particle swarm optimization (PSO); Comprehensive learning (CL); Dynamic multi-swarm (DMS); Mutation operator; Exploitation; Exploration

Funding

  1. National Key Research Program of China Collaborative Precision Positioning ProjectGrant [2016YFB0501900]
  2. National Natural Science Foundation of China [41774017, 41621091]


In this paper, a heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer with two mutation operators (HCLDMS-PSO) is presented. A comprehensive learning (CL) strategy incorporating the global optimal experience of the whole population is used to generate the exemplar for an exploitation subpopulation, while a modified dynamic multi-swarm (DMS) strategy is specially designed to construct the exemplar for an exploration subpopulation. In the canonical DMS strategy, it is unfavorable for different sub-swarms to use the same linearly decreasing inertia weight parameter. We therefore first classify the DMS sub-swarms at the search level, then construct a novel nonlinear adaptive decreasing inertia weight for the different sub-swarms and introduce a non-uniform mutation operator to enhance their exploration capability. Finally, a Gaussian mutation operator is also applied to the gbest of the whole population to avoid falling into local optima. The particles of the two subpopulations update their velocities independently without crippling one another, which prevents a loss of diversity. The performance of HCLDMS-PSO is compared with those of 8 other PSO variants and 11 evolutionary algorithms on two classical benchmark optimization problem sets and a real-world engineering problem. Experimental results demonstrate that HCLDMS-PSO improves the convergence speed, accuracy, and reliability on most optimization problems. (c) 2020 Elsevier Inc. All rights reserved.
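To make the mechanics concrete, the sketch below shows a minimal canonical PSO with two of the ingredients the abstract mentions: a decreasing inertia weight and a Gaussian mutation applied to the global best (gbest) each iteration to help escape local optima. This is a generic illustration, not the authors' HCLDMS-PSO: the CL/DMS exemplar construction, the nonlinear adaptive inertia weight, and the non-uniform mutation are omitted, and all parameter values and function names here are illustrative assumptions.

```python
import random

def sphere(x):
    """Classical benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso_with_gbest_mutation(dim=5, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                # linearly decreasing inertia weight
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 1.49445 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.49445 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = sphere(pos[i])
            if f < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], f
                if f < gbest_val:
                    gbest, gbest_val = pos[i][:], f
        # Gaussian mutation of gbest; keep the mutant only if it improves fitness.
        trial = [min(hi, max(lo, v + rng.gauss(0.0, 0.1 * (hi - lo)))) for v in gbest]
        trial_val = sphere(trial)
        if trial_val < gbest_val:
            gbest, gbest_val = trial, trial_val
    return gbest_val

best = pso_with_gbest_mutation()
```

Accepting the mutated gbest only when it improves fitness is a greedy choice that keeps the mutation from degrading convergence; the paper's actual acceptance rule for its Gaussian and non-uniform operators may differ.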

