Article

Sobolev trained neural network surrogate models for optimization

Journal

COMPUTERS & CHEMICAL ENGINEERING
Volume 153, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compchemeng.2021.107419

Keywords

Black-box optimization; Grey-box optimization; Gradient-enhanced surrogate models

Funding

  1. Engineering & Physical Sciences Research Council (EPSRC) Research Fellowship [EP/T001577/1]
  2. Imperial College Research Fellowship


Neural network surrogate models are often used to replace complex mathematical models in black-box and grey-box optimization. This strategy essentially uses samples generated from a complex model to fit a data-driven, reduced-order model more amenable to optimization. Neural network models can be trained in Sobolev spaces, i.e., models are trained to match the complex function not only in terms of output values, but also the values of their derivatives, to arbitrary degree. This paper examines the direct impacts of Sobolev training on neural network surrogate models embedded in optimization problems, and proposes a systematic strategy for scaling Sobolev-space targets during NN training. In particular, it is shown that Sobolev training results in surrogate models with more accurate derivatives (in addition to more accurately predicting outputs), with direct benefits in gradient-based optimization. Three case studies demonstrate the approach: black-box optimization of the Himmelblau function, and grey-box optimizations of a two-phase flash separator and of two flashes in series. The results show that the advantages of Sobolev training are especially significant in cases of low data volume and/or optimal points near the boundary of the training dataset: areas where NN models traditionally struggle. (c) 2021 Elsevier Ltd. All rights reserved.
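The core idea of Sobolev training described above, augmenting the usual value-matching loss with a penalty on the mismatch between the model's derivatives and the true function's derivatives, can be illustrated with a toy example. The sketch below is not the paper's implementation: it fits a tiny 1-5-1 NumPy network to y = sin(x), uses the network's analytic input-derivative, and a fixed weight `lam` stands in for the paper's systematic target-scaling strategy. Parameter gradients are taken by finite differences purely to keep the code framework-free.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 5  # hidden units; architecture is illustrative, not from the paper

def unpack(p):
    # Flat parameter vector -> (input weights, biases, output weights, output bias)
    return p[:H], p[H:2*H], p[2*H:3*H], p[3*H]

def f(p, x):
    """Network output for a vector of scalar inputs x."""
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)   # (n, H)
    return h @ w2 + b2                  # (n,)

def dfdx(p, x):
    """Analytic input-derivative: sum_i w2_i * (1 - tanh^2) * w1_i."""
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)
    return (1.0 - h**2) @ (w1 * w2)     # (n,)

def sobolev_loss(p, x, y, dy, lam):
    """Value-matching MSE plus lam-weighted derivative-matching MSE."""
    return np.mean((f(p, x) - y)**2) + lam * np.mean((dfdx(p, x) - dy)**2)

def train(x, y, dy, lam, steps=400, lr=0.05, eps=1e-5):
    # Plain gradient descent; parameter gradients by forward differences
    # (fine for a 16-parameter toy model).
    p = 0.5 * rng.standard_normal(3*H + 1)
    for _ in range(steps):
        base = sobolev_loss(p, x, y, dy, lam)
        g = np.empty_like(p)
        for i in range(p.size):
            q = p.copy(); q[i] += eps
            g[i] = (sobolev_loss(q, x, y, dy, lam) - base) / eps
        p -= lr * g
    return p

# Low-data regime: 8 samples of y = sin(x) with exact derivatives cos(x)
x = np.linspace(-2.0, 2.0, 8)
y, dy = np.sin(x), np.cos(x)

p_plain   = train(x, y, dy, lam=0.0)  # standard (value-only) training
p_sobolev = train(x, y, dy, lam=1.0)  # Sobolev training

xt = np.linspace(-2.0, 2.0, 50)
err_plain   = np.mean((dfdx(p_plain, xt) - np.cos(xt))**2)
err_sobolev = np.mean((dfdx(p_sobolev, xt) - np.cos(xt))**2)
print(f"derivative MSE  plain: {err_plain:.4f}  sobolev: {err_sobolev:.4f}")
```

In practice one would use automatic differentiation (e.g., second-order autodiff in a deep-learning framework) rather than finite differences; the relevant point for the abstract is only the shape of the loss, where the derivative term pushes the surrogate's gradients toward the sampled gradients of the complex model before the surrogate is embedded in a gradient-based optimizer.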

