Article

Solving Large-Scale Multiobjective Optimization Problems With Sparse Optimal Solutions via Unsupervised Neural Networks

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 6, Pages 3115-3128

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.2979930

Keywords

Optimization; Neural networks; Evolutionary computation; Search problems; Computer science; Sociology; Statistics; Denoising autoencoder (DAE); large-scale multiobjective optimization; Pareto-optimal subspace; restricted Boltzmann machine (RBM); sparse Pareto-optimal solutions

Funding

  1. Key Project of Science and Technology Innovation 2030 - Ministry of Science and Technology of China [2018AAA0100105]
  2. National Natural Science Foundation of China [61672033, 61876123, 61906001, U1804262]
  3. Hong Kong Scholars Program [XJ2019035]
  4. Anhui Provincial Natural Science Foundation [1808085J06, 1908085QF271]
  5. State Key Laboratory of Synthetical Automation for Process Industries [PAL-N201805]
  6. Research Grants Council of the Hong Kong Special Administrative Region, China [CityU11202418, CityU11209219]
  7. Royal Society International Exchanges [IEC\NSFC\170279]

Abstract

Due to the curse of dimensionality of the search space, it is extremely difficult for evolutionary algorithms to approximate the optimal solutions of large-scale multiobjective optimization problems (LMOPs) using a limited budget of evaluations. If the Pareto-optimal subspace can be approximated during the evolutionary process, the search space is reduced and the difficulty encountered by evolutionary algorithms is greatly alleviated. Following this idea, this article proposes an evolutionary algorithm that solves sparse LMOPs by learning the Pareto-optimal subspace. The proposed algorithm uses two unsupervised neural networks, a restricted Boltzmann machine and a denoising autoencoder, to learn a sparse distribution and a compact representation of the decision variables; the combination of the learned sparse distribution and compact representation is regarded as an approximation of the Pareto-optimal subspace. Genetic operators are conducted in the learned subspace, and the resultant offspring solutions are then mapped back to the original search space by the two neural networks. According to the experimental results on eight benchmark problems and eight real-world problems, the proposed algorithm can effectively solve sparse LMOPs with 10,000 decision variables using only 100,000 evaluations.
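To make the abstract's pipeline concrete, the sketch below shows the general encode-vary-decode idea with untrained placeholder weights, using NumPy only. This is a minimal illustration, not the paper's actual implementation: names such as rbm_encode, dae_encode, and make_offspring are hypothetical, and a real implementation would train the RBM (e.g., by contrastive divergence) and the denoising autoencoder on the current population before using them for variation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1000   # number of decision variables (large-scale)
K = 32     # latent dimension of the learned subspace

# Hypothetical single-layer RBM: encodes the binary mask (which variables
# are nonzero) into K hidden units. Untrained placeholder weights; a real
# RBM would be fit to the masks of the current nondominated solutions.
W_rbm = rng.normal(0, 0.1, (D, K))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_encode(mask):
    # visible -> hidden: sample a binary hidden code
    p = sigmoid(mask @ W_rbm)
    return (rng.random(K) < p).astype(float)

def rbm_decode(h):
    # hidden -> visible: sample a binary sparsity mask in the original space
    p = sigmoid(h @ W_rbm.T)
    return (rng.random(D) < p).astype(float)

# Hypothetical one-hidden-layer denoising autoencoder for the real-valued
# part of each solution; again, placeholder weights stand in for training
# on noisy copies of the population's decision vectors.
W_enc = rng.normal(0, 0.1, (D, K))
W_dec = rng.normal(0, 0.1, (K, D))

def dae_encode(x):
    return np.tanh(x @ W_enc)

def dae_decode(z):
    return z @ W_dec

def make_offspring(x1, m1, x2, m2):
    """Variation in the learned subspace: encode two parents, apply uniform
    crossover plus bit-flip/Gaussian mutation on the codes, then decode back
    to the D-dimensional search space and re-impose sparsity via the mask."""
    h1, h2 = rbm_encode(m1), rbm_encode(m2)
    z1, z2 = dae_encode(x1), dae_encode(x2)
    cross = rng.random(K) < 0.5
    h = np.where(cross, h1, h2)
    z = np.where(cross, z1, z2)
    h = np.abs(h - (rng.random(K) < 1.0 / K))                    # bit-flip mutation
    z = z + rng.normal(0, 0.1, K) * (rng.random(K) < 1.0 / K)    # Gaussian mutation
    mask = rbm_decode(h)
    return dae_decode(z) * mask                                  # sparse offspring

# Usage: two random sparse parents -> one offspring
m1 = (rng.random(D) < 0.05).astype(float)
m2 = (rng.random(D) < 0.05).astype(float)
x1, x2 = rng.normal(0, 1, D) * m1, rng.normal(0, 1, D) * m2
child = make_offspring(x1, m1, x2, m2)
print("nonzero variables in offspring:", int((child != 0).sum()))
```

The split into two models mirrors the abstract's two learning targets: the RBM captures which variables should be zero (the sparse distribution), while the autoencoder compresses the values of the remaining variables (the compact representation), so variation happens in a K-dimensional code rather than the original D-dimensional space.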
