Journal
NATURAL COMPUTING
Volume 13, Issue 1, Pages 17-37
Publisher
SPRINGER
DOI: 10.1007/s11047-013-9408-3
Keywords
Gaussian mutation; Opposition-based learning; Orthogonal learning; Particle swarm optimization; Quadratic interpolation
Funding
- National Natural Science Foundation of China [61373111, 61272279, 61003199, 61203303]
- Fundamental Research Funds for the Central Universities [K50511020014, K5051302084, K50510020011, K5051302049, K5051302023]
- Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) [B07048]
- Program for New Century Excellent Talents in University [NCET-12-0920]
Particle swarm optimization (PSO) is a population-based algorithm for solving global optimization problems. Owing to its efficiency and simplicity, PSO has attracted many researchers' attention and has spawned many variants. Orthogonal learning particle swarm optimization (OLPSO) is a variant of PSO built on a new learning strategy called the orthogonal learning strategy. OLPSO differs from standard PSO in how it uses experience information: in standard PSO, each particle combines its personal historical best experience and the globally best experience through a linear summation, whereas in OLPSO particles can fly in better directions by constructing an efficient exemplar through orthogonal experimental design. However, the global-version orthogonal learning PSO (OLPSO-G) still has drawbacks when solving some complex multimodal optimization functions. In this paper, we propose a quadratic interpolation based OLPSO-G (QIOLPSO-G), in which a quadratic interpolation based construction strategy is applied to the personal historical best experience. Meanwhile, opposition-based learning and Gaussian mutation are also introduced to increase the diversity of the population and discourage premature convergence. Experiments are conducted on 16 benchmark problems to validate the effectiveness of QIOLPSO-G, and comparisons are made with four typical PSO algorithms. The results show that the introduction of the three strategies does enhance the effectiveness of the algorithm.
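For context, the baseline that the abstract contrasts against can be sketched as follows: a minimal canonical global-best PSO with the linear-summation velocity update, alongside illustrative helpers for quadratic interpolation (the vertex of a parabola through three sampled points) and opposition-based learning (reflecting a point within its search bounds). This is a generic sketch, not the authors' QIOLPSO-G implementation; the function names, the coefficient values (w = 0.729, c1 = c2 = 1.494), and the sphere test function are assumptions for illustration only.

```python
import random

def sphere(x):
    # Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=20, iters=200, w=0.729, c1=1.494, c2=1.494, seed=0):
    # Canonical global-best PSO: each particle's velocity is a linear
    # summation of attraction toward its personal best and the swarm's
    # global best -- the learning scheme the abstract contrasts with
    # orthogonal learning.
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

def quadratic_interpolation(x1, f1, x2, f2, x3, f3):
    # Vertex of the parabola fitted through three (x, f(x)) samples -- a
    # common way to construct a promising candidate from existing solutions.
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den

def opposite(x, a, b):
    # Opposition-based learning: the opposite of x_i in [a, b] is a + b - x_i,
    # giving a second candidate on the far side of the search interval.
    return [a + b - xi for xi in x]
```

For example, `quadratic_interpolation(0.0, 4.0, 1.0, 1.0, 3.0, 1.0)` recovers the minimum at x = 2 of f(x) = (x - 2)^2 exactly, since a quadratic through three of its own points is the function itself.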