4.7 Article

Adaptive natural gradient learning algorithms for various stochastic models

Journal

NEURAL NETWORKS
Volume 13, Issue 7, Pages 755-764

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/S0893-6080(00)00051-4

Keywords

feedforward neural network; gradient descent learning; plateau problem; natural gradient learning; adaptive natural gradient learning

Abstract

The natural gradient method has ideal dynamic behavior, resolving the slow learning speed that plateaus cause in the standard gradient descent method. However, it requires computing the Fisher information matrix and its inverse, which makes a direct implementation of natural gradient learning practically infeasible. To address this problem, a preliminary study proposed an adaptive method for estimating the inverse of the Fisher information matrix directly, called the adaptive natural gradient learning method. In this paper, we show that the adaptive natural gradient method can be extended to a wide class of stochastic models: regression with an arbitrary noise model and classification with an arbitrary number of classes. We give explicit forms of the adaptive natural gradient for these models and confirm the practical advantage of the proposed algorithms through computational experiments on benchmark problems. (C) 2000 Elsevier Science Ltd. All rights reserved.
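The core idea behind the adaptive scheme can be sketched as follows: instead of inverting the Fisher information matrix at every step, one maintains a running estimate of its inverse and refreshes it with a rank-one update built from each sample's gradient. The NumPy sketch below illustrates this for the simplest case covered by the paper, regression with Gaussian noise, where the Fisher matrix is the expected outer product of the output gradient. It is a minimal illustration rather than the paper's exact algorithm: the network size, the step sizes eta and eps, and the Sherman-Morrison form of the inverse update are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer MLP for scalar regression (sizes are illustrative).
n_in, n_hid = 2, 8

def init_params():
    W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
    b1 = np.zeros(n_hid)
    w2 = rng.normal(scale=0.5, size=n_hid)
    b2 = 0.0
    return np.concatenate([W1.ravel(), b1, w2, [b2]])

def unpack(theta):
    i = 0
    W1 = theta[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = theta[i:i + n_hid]; i += n_hid
    w2 = theta[i:i + n_hid]; i += n_hid
    b2 = theta[i]
    return W1, b1, w2, b2

def forward_and_grad(theta, x):
    """Return f(x; theta) and the gradient of f with respect to theta."""
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(W1 @ x + b1)
    f = w2 @ h + b2
    dh = w2 * (1.0 - h ** 2)              # df / d(pre-activation)
    grad = np.concatenate([np.outer(dh, x).ravel(), dh, h, [1.0]])
    return f, grad

def adaptive_natural_gradient(X, Y, eta=0.01, eps=0.01, n_epochs=50):
    # Step sizes eta and eps are illustrative and typically need tuning.
    theta = init_params()
    G_inv = np.eye(theta.size)            # running estimate of the inverse Fisher
    for _ in range(n_epochs):
        for x, y in zip(X, Y):
            f, df = forward_and_grad(theta, x)
            # Moving-average Fisher estimate G_new = (1 - eps) G + eps df df^T,
            # inverted in closed form via the Sherman-Morrison identity.
            Gd = G_inv @ df
            denom = (1.0 - eps) + eps * (df @ Gd)
            G_inv = (G_inv - (eps / denom) * np.outer(Gd, Gd)) / (1.0 - eps)
            # Natural-gradient step on the squared-error (Gaussian noise) loss.
            grad_loss = -(y - f) * df
            theta = theta - eta * (G_inv @ grad_loss)
    return theta

# Toy data: noisy sine of a linear projection of the inputs.
X = rng.uniform(-1.0, 1.0, size=(200, n_in))
Y = np.sin(2.0 * X @ np.array([1.0, -1.0])) + 0.1 * rng.normal(size=200)
theta_hat = adaptive_natural_gradient(X, Y)
```

Because the inverse estimate is updated directly, each step costs O(d^2) in the number of parameters d, rather than the O(d^3) cost of inverting the Fisher matrix from scratch, which is what makes the adaptive approach practical.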
