Article

Evolutionary Multitasking for Large-Scale Multiobjective Optimization

Journal

IEEE Transactions on Evolutionary Computation
Volume 27, Issue 4, Pages 863-877

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TEVC.2022.3166482

Keywords

Evolutionary algorithm (EA); large-scale multiobjective optimization; multitasking; transfer learning

Abstract

Evolutionary transfer optimization (ETO) is a hot topic in evolutionary computation that seeks to improve optimization efficiency by transferring knowledge across related tasks. This article proposes a multitasking ETO algorithm based on transfer learning to solve large-scale multiobjective optimization problems (LMOPs). The algorithm trains a discriminative reconstruction network (DRN) for each LMOP to transfer solutions between problems, evaluate inter-task correlation, and learn a dimension-reduced Pareto-optimal subspace of the target LMOP. The effectiveness of the algorithm is validated on real-world and synthetic problem suites.

Evolutionary transfer optimization (ETO) has become a hot research topic in evolutionary computation; it rests on the observation that learning and transferring knowledge across related optimization tasks can improve optimization efficiency. However, few studies have employed ETO to solve large-scale multiobjective optimization problems (LMOPs). To fill this research gap, this article proposes a new multitasking ETO algorithm that uses a powerful transfer learning model to solve multiple LMOPs simultaneously. In particular, inspired by adversarial domain adaptation in transfer learning, a discriminative reconstruction network (DRN) model (containing an encoder, a decoder, and a classifier) is created for each LMOP. At each generation, the DRN is trained on the currently obtained nondominated solutions of all LMOPs via backpropagation with gradient descent. With this well-trained DRN model, the proposed algorithm can transfer solutions from the source LMOPs directly to the target LMOP to assist its optimization, can evaluate the correlation between the source and target LMOPs to control the transfer of solutions, and can learn a dimension-reduced Pareto-optimal subspace of the target LMOP to improve the efficiency of transfer optimization in the large-scale search space. Moreover, we propose a real-world multitasking LMOP suite that simulates the training of deep neural networks (DNNs) on multiple different classification tasks. Finally, the effectiveness of the proposed algorithm is validated on this real-world problem suite and two other synthetic problem suites.
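For readers who want a concrete picture of the model described in the abstract, the following is a minimal sketch (not the authors' code) of a discriminative reconstruction network with an encoder, a decoder, and a classifier, written in PyTorch. All layer sizes, the latent dimension, the loss combination, and the training loop are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class DRN(nn.Module):
    """Discriminative reconstruction network: encoder + decoder + classifier (illustrative)."""
    def __init__(self, n_vars: int, latent_dim: int, n_tasks: int):
        super().__init__()
        # Encoder: maps a large-scale decision vector into a reduced subspace.
        self.encoder = nn.Sequential(nn.Linear(n_vars, latent_dim), nn.Tanh())
        # Decoder: reconstructs the decision vector from the latent code.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, n_vars), nn.Sigmoid())
        # Classifier: predicts which LMOP (task) a solution comes from.
        self.classifier = nn.Linear(latent_dim, n_tasks)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

def train_step(model, optimizer, solutions, task_labels):
    """One illustrative update on nondominated solutions pooled from all tasks."""
    recon, logits, _ = model(solutions)
    # Reconstruction loss keeps the latent subspace faithful to the solutions;
    # classification loss makes it discriminative across tasks.
    loss = nn.functional.mse_loss(recon, solutions) \
         + nn.functional.cross_entropy(logits, task_labels)
    optimizer.zero_grad()
    loss.backward()   # backpropagation
    optimizer.step()  # gradient-descent update
    return loss.item()

# Hypothetical usage: 1000 decision variables, a 10-D latent subspace, 2 LMOPs.
model = DRN(n_vars=1000, latent_dim=10, n_tasks=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
solutions = torch.rand(64, 1000)           # stand-in for pooled nondominated solutions
task_labels = torch.randint(0, 2, (64,))   # which LMOP each solution belongs to
train_step(model, optimizer, solutions, task_labels)

In the actual algorithm, the trained encoder and decoder would additionally be used to map candidate solutions between problems and into the reduced Pareto-optimal subspace; the sketch only shows the network structure and its training signal.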
