Article

Deep reinforcement learning assisted co-evolutionary differential evolution for constrained optimization

Journal

SWARM AND EVOLUTIONARY COMPUTATION
Volume 83

Publisher

ELSEVIER
DOI: 10.1016/j.swevo.2023.101387

Keywords

Constraint handling technique; Deep reinforcement learning; Differential evolution; Co-evolution; Evolutionary operator

Abstract

Solving constrained optimization problems (COPs) with evolutionary algorithms (EAs) is a popular research direction owing to its potential and diverse applications. A key issue in solving COPs is the choice of constraint handling technique (CHT), since different CHTs can steer evolution in different directions. Combining EAs with deep reinforcement learning (DRL) is a promising and emerging approach for solving COPs: DRL can relieve the need to pre-set operators in EAs, but the neural network must obtain diverse training data within the limited number of evaluations an EA allows. Based on these considerations, this work proposes a DRL-assisted co-evolutionary differential evolution algorithm, named CEDE-DRL, which can effectively use DRL to help EAs solve COPs. (1) The method incorporates co-evolution into the extraction of training data for the first time, ensuring the diversity of samples and improving the accuracy of the neural network model through information exchange between multiple populations. (2) Multiple CHTs are used for offspring selection to ensure the algorithm's generality and flexibility. (3) DRL is used to evaluate the population state, taking feasibility, convergence, and diversity into account in the state setting and using the overall degree of improvement as the reward; the neural network then selects suitable parent populations and corresponding archives for mutation. (4) To avoid premature convergence and local optima, an adaptive operator selection and individual archive elimination mechanism is added. Comparisons with state-of-the-art algorithms on the CEC2010 and CEC2017 benchmark suites show that the proposed method performs competitively and produces robust solutions, and results on the CEC2020 real-world application test set show that it is also effective on practical problems.
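To make the ingredients above concrete, the following minimal Python sketch is illustrative only and is not the authors' implementation: it combines a standard DE/rand/1 step with one widely used CHT (Deb's feasibility rules) and computes the kind of feasibility/convergence/diversity state features a DRL agent could observe. The toy problem, the exact feature definitions, and all parameter values are assumptions made for this sketch.

```python
import numpy as np

def toy_cop(x):
    """Illustrative constrained problem (assumed for this sketch, not from
    the paper): minimize f(x) = sum(x^2) s.t. g(x) = 1 - sum(x) <= 0."""
    f = float(np.sum(x ** 2))
    v = max(0.0, 1.0 - float(np.sum(x)))  # constraint violation, 0 if feasible
    return f, v

def deb_rules(f_new, v_new, f_old, v_old):
    """Deb's feasibility rules, one common CHT: any feasible solution beats
    any infeasible one; ties are broken by fitness or by violation."""
    if v_new == 0.0 and v_old == 0.0:
        return f_new <= f_old          # both feasible: compare fitness
    if v_new == 0.0 or v_old == 0.0:
        return v_new == 0.0            # exactly one feasible: it wins
    return v_new <= v_old              # both infeasible: lower violation wins

def state_features(fits, viols, pop):
    """Population-state descriptors of the kind the paper's DRL agent
    observes (feasibility, convergence, diversity); exact forms assumed."""
    feas_ratio = float(np.mean(viols == 0.0))
    convergence = float(np.std(fits) / (abs(np.mean(fits)) + 1e-12))
    diversity = float(np.mean(np.linalg.norm(pop - pop.mean(axis=0), axis=1)))
    return np.array([feas_ratio, convergence, diversity])

rng = np.random.default_rng(0)
dim, n_pop, F, CR = 5, 20, 0.5, 0.9
pop = rng.uniform(-5.0, 5.0, size=(n_pop, dim))
fits, viols = (np.array(a) for a in zip(*(toy_cop(x) for x in pop)))

for gen in range(100):
    for i in range(n_pop):
        idx = rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                    # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True             # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        f_t, v_t = toy_cop(trial)
        if deb_rules(f_t, v_t, fits[i], viols[i]):  # CHT-based survivor selection
            pop[i], fits[i], viols[i] = trial, f_t, v_t
    # In CEDE-DRL a trained network would map features like these to an
    # action; here they are only reported for inspection.
    if gen % 20 == 0:
        print(gen, state_features(fits, viols, pop), fits.min())
```

In the full CEDE-DRL algorithm described in the abstract, multiple co-evolving populations exchange information and supply such population states as training data, several CHTs (not just the single rule above) are used for offspring selection, and the DRL agent's action selects the parent population, archive, and evolutionary operator rather than the fixed DE/rand/1 step used in this sketch.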
