Article

A Learning-Based Particle Swarm Optimizer for Solving Mathematical Combinatorial Problems

Journal

AXIOMS
Volume 12, Issue 7, Pages -

Publisher

MDPI
DOI: 10.3390/axioms12070643

Keywords

reinforcement learning; learning-based hybridizations; particle swarm optimization; mathematical combinatorial problem

Abstract

This paper presents a set of adaptive parameter control methods based on reinforcement learning for the particle swarm algorithm. The aim is to adjust the algorithm's parameters during the run, giving the metaheuristic the ability to learn and adapt dynamically to the problem and its context. The proposal integrates Q-learning into the optimization algorithm for parameter control; the applied strategies include a shared Q-table, separate tables per parameter, and a flexible state representation. The proposal was evaluated on various instances of the multidimensional knapsack problem, which belongs to the NP-hard class. This problem can be formulated as a mathematical combinatorial problem over a set of items with multiple attributes or dimensions, aiming to maximize the total value or utility while respecting constraints on the total capacity or available resources. Experimental and statistical tests comparing the results of each hybridization show that these hybridizations can significantly improve the quality of the solutions found over the native version of the algorithm.
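The core idea described in the abstract can be sketched as follows: a binary particle swarm optimizer works on a toy multidimensional knapsack instance while a single shared Q-table chooses the inertia weight each iteration via an epsilon-greedy policy. This is a minimal illustrative sketch, not the authors' exact design; the toy instance, the reward scheme (improvement of the global best), and all parameter values are assumptions.

```python
import random

random.seed(0)

# Toy multidimensional knapsack (illustrative data, not from the paper):
# maximize sum(values[i] * x[i]) subject to, for each dimension d,
# sum(weights[d][i] * x[i]) <= capacity[d], with x[i] in {0, 1}.
values   = [10, 13, 7, 8, 15]
weights  = [[2, 3, 1, 4, 5],   # resource consumption, dimension 0
            [3, 1, 2, 3, 2]]   # resource consumption, dimension 1
capacity = [8, 6]

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(len(x))) <= c
               for w, c in zip(weights, capacity))

def fitness(x):
    # Death penalty for infeasible solutions (a common, simple choice).
    return sum(v * xi for v, xi in zip(values, x)) if feasible(x) else 0

def sigmoid(z):
    return 1.0 / (1.0 + 2.718281828459045 ** (-z))

# Shared single-state Q-table over candidate inertia weights (assumed values).
ACTIONS = [0.4, 0.7, 0.9]
q = [0.0] * len(ACTIONS)
alpha, gamma, eps = 0.3, 0.9, 0.2

n, dim, iters = 10, len(values), 60
pos = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = max(pbest, key=fitness)[:]

for _ in range(iters):
    # Epsilon-greedy action selection: pick an inertia weight from the Q-table.
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(range(len(ACTIONS)), key=lambda i: q[i]))
    w = ACTIONS[a]
    before = fitness(gbest)
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                         + 2.0 * r2 * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-4.0, min(4.0, vel[i][d]))  # velocity clamp
            # Binary PSO position update via sigmoid transfer function.
            pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) > fitness(gbest):
                gbest = pbest[i][:]
    # Reward = improvement of the global best; standard Q-learning update.
    reward = fitness(gbest) - before
    q[a] += alpha * (reward + gamma * max(q) - q[a])

print("best solution:", gbest, "value:", fitness(gbest))
```

The single shared Q-table here corresponds to the simplest of the strategies mentioned in the abstract; the variants with separate tables per parameter would maintain one such table (and one action set) for each controlled parameter.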

