Journal
COMPUTERS & GEOSCIENCES
Volume 171, Issue -, Pages -
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.cageo.2022.105292
Keywords
LSRTM; Stochastic optimization; Devito; quasi-Newton
Abstract
As we keep moving towards large-scale 3D datasets, wave-equation-based inversions such as least-squares reverse time migration (LSRTM) must converge rapidly to reduce the computational burden while delivering their well-known advantages: compensating for uneven illumination and providing accurate amplitudes. To that end, we explore stochastic optimization methods for LSRTM beyond minibatch stochastic gradient descent (SGD), which is considered the state of the art for large-scale machine learning problems. We apply to the LSRTM problem a second-order stochastic method based on a quasi-Newton approach, known as the Sum of Functions Optimizer (SFO) algorithm. This method also works well with minibatches of data and has shown good performance in the optimization of multi-layer neural networks. It maintains computational tractability and limits memory requirements even for high-dimensional optimization problems. As is typical of quasi-Newton methods, no adjustment of hyperparameters is required. In the experiments presented here, using synthetic data, we demonstrate that the SFO algorithm achieves better results and faster convergence than minibatch SGD.
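The abstract contrasts minibatch SGD with the second-order SFO method for a least-squares objective. As a minimal sketch of the baseline only, the following toy example (not the paper's code; all names and parameters are hypothetical) runs minibatch SGD on a linear least-squares problem min_x ||Ax - b||^2, the same objective family LSRTM solves, with the wave-equation Born modeling operator standing in for the dense matrix A:

```python
import numpy as np

# Toy stand-in for LSRTM: a dense linear operator A replaces the Born
# modeling operator, and x plays the role of the reflectivity model.
rng = np.random.default_rng(0)
n_data, n_model = 200, 20
A = rng.standard_normal((n_data, n_model))
x_true = rng.standard_normal(n_model)
b = A @ x_true  # noise-free "observed" data

def minibatch_sgd(A, b, batch_size=20, lr=1e-3, n_epochs=50, seed=1):
    """Minimize 0.5 * ||A x - b||^2 with minibatch SGD (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    n = A.shape[0]
    for _ in range(n_epochs):
        order = rng.permutation(n)  # reshuffle shots/rows each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            residual = A[idx] @ x - b[idx]
            # Stochastic gradient of the minibatch misfit
            x -= lr * (A[idx].T @ residual)
    return x

x_est = minibatch_sgd(A, b)
rel_res = float(np.linalg.norm(A @ x_est - b) / np.linalg.norm(b))
print(rel_res)
```

Note the manually chosen step size `lr`: this is exactly the hyperparameter tuning that the paper reports the quasi-Newton SFO method avoids, since SFO builds its own curvature estimates from the per-minibatch subfunctions.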