Article

Knowledge-guided Bayesian classification for dynamic multi-objective optimization

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 250

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109173

Keywords

Dynamic multi-objective optimization; Naive Bayesian classification; Evolutionary algorithm; Cluster algorithm

Funding

  1. National Natural Science Foundation of China (NSFC) [61876110, JCYJ20190808164211203]
  2. NSFC [U1713212]
  3. Guangdong Pearl River Talent Recruitment Program [U1713212]
  4. Shenzhen Science and Technology Innovation Commission [2019ZT08X603, R2020A045]

Abstract

Dynamic multi-objective optimization problems (DMOPs) typically contain multiple conflicting objectives that vary over time, requiring optimization algorithms to quickly track the changing Pareto-optimal solutions (POS). Recently, prediction-based dynamic multi-objective evolutionary algorithms (DMOEAs) built on transfer learning ideas have been considered promising: they accelerate the search towards the POS by transferring information from previous environments to estimate the location of the POS in the next environment. However, most existing methods transfer search experience from only one or two previous environments when constructing the prediction model for the next time step, which may ignore effective information from the earlier search process and thereby degrade prediction accuracy in some cases. In this paper, a knowledge-guided Bayesian classification approach for DMOEAs, called KGB-DMOEA, is proposed, which aims to achieve robust prediction by fully exploiting the information from all historical environments. In particular, when an environmental change is detected, a knowledge reconstruction-examination strategy divides all historical optimal solutions into useful and useless ones, which serve as positive and negative samples, respectively, for training the prediction model in subsequent steps. Then, a nonlinear probabilistic classifier, i.e., a naive Bayesian classifier, is constructed from these training samples; it can fully mine the effective knowledge from all historical environments and predict a high-quality initial population for the new environment. Experimental results on multiple DMOP test suites demonstrate that KGB-DMOEA is superior to several state-of-the-art DMOEAs in solving various DMOPs. (C) 2022 Elsevier B.V. All rights reserved.
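
To make the described workflow concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of classification-guided population seeding after an environmental change: historical optimal solutions are re-evaluated under the new objectives, labelled useful (non-dominated) or useless (dominated), and a Gaussian naive Bayes model trained on these labels scores randomly sampled candidates. The function names, the dominance-based labelling rule, and the 10x oversampling factor are illustrative assumptions.

```python
# Hypothetical sketch of classification-guided population seeding for a DMOEA.
# Assumptions (not from the paper): dominance-based labelling of the history,
# uniform candidate sampling, and scikit-learn's GaussianNB as the classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB


def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    return np.all(f1 <= f2) and np.any(f1 < f2)


def label_history(history, new_objective_fn):
    """Re-evaluate all historical optimal solutions in the new environment;
    non-dominated ones are labelled useful (1), the rest useless (0)."""
    X = np.vstack(history)                      # decision vectors from all past environments
    F = np.array([new_objective_fn(x) for x in X])
    y = np.ones(len(X), dtype=int)
    for i in range(len(X)):
        if any(dominates(F[j], F[i]) for j in range(len(X)) if j != i):
            y[i] = 0
    return X, y


def seed_population(history, new_objective_fn, bounds, pop_size, rng):
    """Train a Gaussian naive Bayes model on useful/useless samples and keep
    the sampled candidates it considers most likely to be useful."""
    X, y = label_history(history, new_objective_fn)
    if len(np.unique(y)) < 2:                   # degenerate case: no negative samples
        return X[rng.choice(len(X), size=pop_size)]
    clf = GaussianNB().fit(X, y)
    lo, hi = bounds
    candidates = rng.uniform(lo, hi, size=(10 * pop_size, X.shape[1]))
    p_useful = clf.predict_proba(candidates)[:, list(clf.classes_).index(1)]
    return candidates[np.argsort(-p_useful)[:pop_size]]
```

In KGB-DMOEA the labelling is performed by the knowledge reconstruction-examination strategy rather than this simple dominance check, and the predicted initial population would subsequently be refined by the underlying evolutionary optimizer.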
