Article

Political optimizer with interpolation strategy for global optimization

Journal

PLOS ONE
Volume 16, Issue 5, Pages -

Publisher

PUBLIC LIBRARY SCIENCE
DOI: 10.1371/journal.pone.0251204

Keywords

-

Funding

  1. National Natural Science Foundation of China [61861012]
  2. Guangxi Natural Science Foundation Joint Funding Project [2018GXNSFAA138115]
  3. Science Foundation of Guilin University of Aerospace Technology [XJ20KT09]
  4. Guangxi Key Laboratory of Automatic Detecting Technology and Instruments [YQ21106]


The political optimizer (PO) is a cutting-edge meta-heuristic optimization technique that simulates the multi-stage process of politics in human society, but it is prone to stagnation in local optima. Novel PO variants are proposed that integrate interpolation strategies and Refraction Learning (RL) to enhance exploration capacity and to balance global exploration against local exploitation, leading to superior performance on global optimization problems.
The political optimizer (PO) is a relatively recent state-of-the-art meta-heuristic optimization technique for global optimization problems, as well as real-world engineering optimization, which mimics the multi-staged process of politics in human society. However, due to a greedy strategy during the election phase and an inappropriate balance of global exploration and local exploitation during the party-switching stage, it suffers from stagnation in local optima and low convergence accuracy. To overcome these drawbacks, a sequence of novel PO variants was proposed by integrating PO with Quadratic Interpolation, Advanced Quadratic Interpolation, Cubic Interpolation, Lagrange Interpolation, Newton Interpolation, and Refraction Learning (RL). The main contributions of this work are as follows. (1) The interpolation strategy was adopted to help the current global optimum jump out of local optima. (2) RL was integrated into PO to improve the diversity of the population. (3) To better balance exploration and exploitation during the party-switching stage, a logistic model was proposed to maintain this balance. To the best of our knowledge, this is the first time PO has been combined with an interpolation strategy and RL. The performance of the best PO variant was evaluated on 19 widely used benchmark functions and 30 test functions from the IEEE CEC 2014 suite. Experimental results revealed the superior exploration capacity of the proposed algorithm.
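The abstract names three ingredients (interpolation-based refinement of the current best solution, refraction learning, and a logistic model for the party-switching rate) without giving formulas. The sketch below illustrates commonly used forms of each ingredient; the exact equations, function names, and parameter values (`k`, `steepness`, `lam_max`) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sphere(x):
    """Toy objective for the demo."""
    return float(np.sum(x**2))

def quadratic_interpolation(x1, f1, x2, f2, x3, f3, eps=1e-12):
    # Per-dimension vertex of the parabola through (x1,f1), (x2,f2), (x3,f3):
    # a classic QI step for nudging the best solution out of a local optimum
    # (assumed form, not necessarily the paper's exact equation).
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = 2.0 * ((x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3)
    return num / (den + eps)  # eps is a crude guard against a zero denominator

def refraction_learning(x, lb, ub, k=1000.0):
    # Opposition-style point derived from Snell's law; a large k places the
    # refracted point near the centre of the search box (assumed form).
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k

def logistic_switch_rate(t, t_max, lam_max=1.0, steepness=10.0):
    # Logistic decay of the party-switching rate: high early (exploration),
    # low late (exploitation). The shape and constants are assumptions.
    return lam_max / (1.0 + np.exp(steepness * (t / t_max - 0.5)))

# --- Minimal demo on a 5-D sphere function ---
rng = np.random.default_rng(0)
dim, lb, ub = 5, -10.0, 10.0
pop = rng.uniform(lb, ub, size=(30, dim))
fits = np.array([sphere(p) for p in pop])
order = np.argsort(fits)
x1, x2, x3 = pop[order[0]], pop[order[1]], pop[order[2]]

cand = quadratic_interpolation(x1, fits[order[0]], x2, fits[order[1]],
                               x3, fits[order[2]])
cand = np.clip(cand, lb, ub)
if sphere(cand) < sphere(x1):              # greedy acceptance of the QI point
    x1 = cand

opp = np.clip(refraction_learning(x1, lb, ub), lb, ub)
if sphere(opp) < sphere(x1):               # keep the refracted point if better
    x1 = opp

print("best fitness:", sphere(x1))
print("switch rate at t=10 of 100:", logistic_switch_rate(10, 100))
```

In this reading, the QI and RL steps act only on the current best solution and are accepted greedily, while the logistic schedule would replace PO's original linearly decreasing party-switching rate; how the authors actually wire these pieces into PO's phases is described in the full paper.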
