Article

GANs training: A game and stochastic control approach


This paper analyzes the difficulties in training generative adversarial networks (GANs) for financial time series and proposes a stochastic control framework for hyper-parameter tuning. It establishes the dynamic programming principle and solves the corresponding minimax game, deriving explicit forms for the optimal adaptive learning rate and batch size. Empirical studies demonstrate that this approach outperforms the standard ADAM method in terms of convergence and robustness.
Training generative adversarial networks (GANs) is known to be difficult, especially for financial time series. This paper first analyzes the well-posedness problem in GANs minimax games and the widely recognized convexity issue in GANs objective functions. It then proposes a stochastic control framework for hyper-parameter tuning in GANs training. The weak form of the dynamic programming principle and the existence and uniqueness of the value function, in the viscosity sense, are established for the corresponding minimax game. In particular, explicit forms for the optimal adaptive learning rate and batch size are derived and shown to depend on the convexity of the objective function, revealing a relation between improper choices of the learning rate and explosion in GANs training. Finally, empirical studies demonstrate that training algorithms incorporating this adaptive control approach outperform the standard ADAM method in terms of convergence and robustness. From the GANs training perspective, the analysis in this paper provides analytical support for the popular practice of clipping, and suggests that the convexity and well-posedness issues in GANs may be tackled through appropriate choices of hyper-parameters.
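
The abstract does not reproduce the explicit formulas, but the qualitative takeaway, namely that the learning rate should shrink where the objective is badly conditioned and be clipped to a bounded range, can be illustrated on a toy minimax problem. The sketch below uses a hypothetical curvature-damped step-size rule on a quadratic-bilinear game; the objective, the secant curvature proxy, and the bounds in `adaptive_lr` are illustrative assumptions, not the paper's derived closed-form policy.

```python
import numpy as np

# Toy quadratic-bilinear minimax game f(x, y) = a*x**2/2 + b*x*y - c*y**2/2,
# standing in for a GAN generator/discriminator objective. The step-size rule
# below is a hypothetical curvature-damped heuristic with clipping; it only
# illustrates the idea that the learning rate should shrink where the
# objective is badly conditioned, and that clipping guards against explosion.

def grad_x(x, y, a=0.2, b=1.0):
    return a * x + b * y        # gradient for the minimizing player

def grad_y(x, y, b=1.0, c=0.2):
    return b * x - c * y        # gradient for the maximizing player

def adaptive_lr(grad_new, grad_old, param_new, param_old,
                base_lr=0.1, lr_min=1e-3, lr_max=0.2):
    # Secant estimate of local curvature; larger curvature -> smaller step.
    denom = abs(param_new - param_old) + 1e-12
    curv = abs(grad_new - grad_old) / denom
    return float(np.clip(base_lr / (1.0 + curv), lr_min, lr_max))

x, y = 1.0, -1.0
x_prev, y_prev = x + 1e-2, y + 1e-2          # dummy previous iterates
gx_prev, gy_prev = grad_x(x_prev, y_prev), grad_y(x_prev, y_prev)

for step in range(500):
    gx, gy = grad_x(x, y), grad_y(x, y)
    lr_x = adaptive_lr(gx, gx_prev, x, x_prev)
    lr_y = adaptive_lr(gy, gy_prev, y, y_prev)
    x_prev, y_prev, gx_prev, gy_prev = x, y, gx, gy
    x = x - lr_x * gx           # descent step (generator-like player)
    y = y + lr_y * gy           # ascent step (discriminator-like player)

print(f"approximate saddle point: x={x:.4f}, y={y:.4f}")
```

With the small quadratic terms in this toy game, the iterates spiral into the saddle point at the origin; replacing the adaptive rule with a large fixed step makes the same iteration oscillate or diverge, which mirrors the abstract's point about improper learning-rate choices and explosion.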
