4.5 Article

Enhancing multi-objective evolutionary neural architecture search with training-free Pareto local search

Journal

APPLIED INTELLIGENCE
Volume 53, Issue 8, Pages 8654-8672

Publisher

SPRINGER
DOI: 10.1007/s10489-022-04032-y

Keywords

Neural architecture search; Evolutionary computation; Memetic algorithm; Multi-objective optimization; Synaptic flow

Neural Architecture Search (NAS) is a multi-objective optimization problem that automates the design of high-performing neural network architectures. This article introduces a local search method, Potential Solution Improving (PSI), to enhance the performance of multi-objective evolutionary algorithms (MOEAs). Experimental results confirm the effectiveness of the proposed method, which reduces the computational cost roughly fourfold compared to the baseline.
Neural Architecture Search (NAS), which automates the design of high-performing neural network architectures, is a multi-objective optimization problem. A single ideal architecture that optimizes both predictive performance (e.g., network accuracy) and computational cost (e.g., model size, number of parameters, number of floating-point operations) does not exist. Instead, there is a Pareto front of multiple candidate architectures, each representing an optimal trade-off between the competing objectives. Multi-Objective Evolutionary Algorithms (MOEAs) are often employed to approximate such Pareto-optimal fronts for NAS problems. In this article, we introduce a local search method, namely Potential Solution Improving (PSI), that aims to improve certain potential solutions on approximation fronts to enhance the performance of MOEAs. The main bottleneck in NAS is the considerable computational cost incurred by having to train a large number of candidate architectures to evaluate their accuracy. Recently, Synaptic Flow has been proposed as a metric that characterizes the relative performance of deep neural networks without running any training epochs. We therefore propose that our PSI method can use this training-free metric as a proxy for network accuracy during local search steps. We conduct experiments with the well-known MOEA Non-dominated Sorting Genetic Algorithm II (NSGA-II) coupled with the training-free PSI local search on NAS problems created from the standard benchmarks NAS-Bench-101 and NAS-Bench-201. Experimental results confirm the efficiency enhancements brought about by our proposed method, which reduces the computational cost fourfold compared to the baseline approach. The source code for the experiments in the article can be found at: https://github.com/ELO-Lab/MOENAS-TF-PSI.
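To make the training-free proxy concrete, the sketch below shows how a Synaptic Flow (SynFlow) score, in the style of Tanaka et al. (2020), can be computed for a PyTorch model without any training epochs. This is an illustrative reimplementation under stated assumptions (function name, input shape, and CPU/float32 execution are not from the paper); the authors' actual code is in the linked repository.

```python
import torch


@torch.no_grad()
def _linearize(model):
    # Replace every parameter/buffer with its absolute value and remember the
    # signs, so the forward pass computes a purely positive, magnitude-based flow.
    signs = {}
    for name, tensor in model.state_dict().items():
        signs[name] = torch.sign(tensor)
        tensor.abs_()
    return signs


@torch.no_grad()
def _restore(model, signs):
    # Undo the linearization by re-applying the stored signs.
    for name, tensor in model.state_dict().items():
        tensor.mul_(signs[name])


def synaptic_flow_score(model, input_shape=(1, 3, 32, 32)):
    """Training-free SynFlow score of `model` (higher is assumed better).

    Illustrative sketch only: the paper uses Synaptic Flow as a proxy for
    accuracy during PSI local search, but this exact implementation is an
    assumption, not the authors' code.
    """
    model.eval()
    signs = _linearize(model)

    # All-ones input so the output depends only on parameter magnitudes.
    x = torch.ones(input_shape)
    model.zero_grad()
    out = model(x)
    out.sum().backward()

    # SynFlow = sum over parameters of |theta * dL/dtheta|.
    score = 0.0
    for p in model.parameters():
        if p.grad is not None:
            score += (p * p.grad).abs().sum().item()

    _restore(model, signs)
    model.zero_grad()
    return score
```

In a PSI-style local search, a score like this could stand in for validation accuracy when comparing a candidate architecture against its neighbours on the approximation front, which is what allows the local search to run without training any network.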
