Article

Multiobjective Evolutionary Algorithms Are Still Good: Maximizing Monotone Approximately Submodular Minus Modular Functions

Journal

EVOLUTIONARY COMPUTATION
Volume 29, Issue 4, Pages 463-490

Publisher

MIT PRESS
DOI: 10.1162/evco_a_00288

Keywords

Submodular optimization; multiobjective evolutionary algorithms; running time analysis; computational complexity; empirical study

Funding

  1. NSFC [62022039]
  2. Jiangsu NSF [BK20201247]

Abstract

This article studies the problem class of maximizing monotone approximately submodular minus modular functions and proves that the GSEMO algorithm can achieve the best-known polynomial-time approximation guarantee by simultaneously optimizing a distorted objective function and the size. Empirical studies show the excellent performance of the GSEMO in applications such as Bayesian experimental design and directed vertex cover.
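
In symbols, the problem class described above can be sketched as follows; the ground set V, the size bound k, and the use of γ for the submodularity ratio of g are standard notation assumed here rather than quoted from the article:

\[
  \max_{X \subseteq V,\; |X| \le k} \; f(X) = g(X) - c(X),
  \qquad c(X) = \sum_{v \in X} c(v), \quad c(v) \ge 0,
\]

where g is non-negative, monotone, and γ-approximately submodular (γ ∈ (0,1]), so that f = g - c is in general neither monotone nor submodular.
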
As evolutionary algorithms (EAs) are general-purpose optimization algorithms, recent theoretical studies have tried to analyze their performance on general problem classes, with the goal of providing a general theoretical explanation of the behavior of EAs. In particular, a simple multiobjective EA, GSEMO, has been shown to achieve good polynomial-time approximation guarantees for submodular optimization, where the objective function is only required to satisfy certain properties and its explicit formulation is not needed. Submodular optimization has wide applications in diverse areas, and previous studies have considered the cases where the objective function is monotone submodular, monotone non-submodular, or non-monotone submodular. To complement this line of research, this article studies the problem class of maximizing monotone approximately submodular minus modular functions (i.e., g-c) with a size constraint, where g is a non-negative monotone approximately submodular function and c is a non-negative modular function, so that the objective function (g-c) is, in general, neither monotone nor submodular. In contrast to previous analyses, we prove that by optimizing the original objective function (g-c) and the size simultaneously, the GSEMO fails to achieve a good polynomial-time approximation guarantee. However, we also prove that by optimizing a distorted objective function and the size simultaneously, the GSEMO can still achieve the best-known polynomial-time approximation guarantee. Empirical studies on the applications of Bayesian experimental design and directed vertex cover show the excellent performance of the GSEMO.
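
The key algorithmic idea in the abstract, running GSEMO on a distorted objective together with the subset size, can be made concrete with a short sketch. The Python below is illustrative only: the names (gsemo, g, c, T), the population handling, and in particular the surrogate (1 - 1/k)^(k - |X|) g(X) - c(X), borrowed from the distorted-greedy idea, are assumptions for exposition and not the exact distorted function or code analyzed in the article.

```python
import random

def gsemo(n, k, g, c, T, seed=None):
    """Minimal GSEMO sketch for maximizing g(X) - c(X) subject to |X| <= k.

    A solution is a bit vector of length n (x[i] = 1 means item i is selected).
    Two objectives are kept in the Pareto population: a distorted surrogate of
    g - c (to be maximized) and the subset size (to be minimized).
    """
    rng = random.Random(seed)

    def size(x):
        return sum(x)

    def distorted(x):
        # Illustrative surrogate in the spirit of distorted greedy:
        # scale g by (1 - 1/k)^(k - |X|) before subtracting the modular cost c.
        # The exact distorted function analyzed in the article may differ
        # (e.g., it can involve the submodularity ratio of g).
        s = size(x)
        if s > k:
            return float("-inf")  # infeasible solutions never enter the population
        return (1.0 - 1.0 / k) ** (k - s) * g(x) - c(x)

    def objectives(x):
        # Both components are to be maximized: (surrogate value, -|X|).
        return (distorted(x), -size(x))

    def weakly_dominates(a, b):
        fa, fb = objectives(a), objectives(b)
        return all(u >= v for u, v in zip(fa, fb))

    population = [[0] * n]  # start from the empty set
    for _ in range(T):
        parent = rng.choice(population)
        # Standard bit-wise mutation: flip each bit independently with prob. 1/n.
        child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
        if not any(weakly_dominates(p, child) for p in population):
            # Keep the child; drop every solution it now weakly dominates.
            population = [p for p in population if not weakly_dominates(child, p)]
            population.append(child)

    # Report the best feasible subset w.r.t. the original objective g - c.
    feasible = [p for p in population if size(p) <= k]
    return max(feasible, key=lambda x: g(x) - c(x))
```

Here g and c are user-supplied callables on bit vectors (e.g., an approximately submodular utility and a linear cost), and T is the iteration budget. The article's guarantee concerns the expected number of iterations needed to reach the approximation bound, which this sketch does not reproduce.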

