Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS
Volume 21, Issue 11, Pages 1793-1803
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNN.2010.2073482
Keywords
Forward-only computation; Levenberg-Marquardt algorithm; neural network training
The method introduced in this paper allows training of arbitrarily connected neural networks, so more powerful architectures with cross-layer connections can be trained efficiently. The proposed method also simplifies training by replacing the traditional forward-and-backward computation with a forward-only computation: the information needed for the gradient vector (for first-order algorithms) and for the Jacobian or Hessian matrix (for second-order algorithms) is obtained during the forward pass. Because the algorithm can train these more complex architectures, the same problems can be solved with a much smaller number of neurons. Computation-cost comparisons show that the proposed forward-only computation can be faster than the traditional implementation of the Levenberg-Marquardt algorithm.
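For context, the abstract's second-order case feeds a Jacobian into the standard Levenberg-Marquardt weight update. The sketch below shows only that generic update rule, w ← w − (JᵀJ + μI)⁻¹Jᵀe, applied to a toy linear least-squares fit; the Jacobian here is computed analytically for the toy model and is not the paper's forward-only procedure. All names and the toy problem are illustrative assumptions, not code from the paper.

```python
import numpy as np

def lm_step(w, errors, J, mu):
    """One Levenberg-Marquardt update: w - (J^T J + mu*I)^-1 J^T e.
    J has one row per training pattern and one column per weight;
    in the paper's method its entries would be gathered during the
    forward-only pass."""
    JtJ = J.T @ J
    g = J.T @ errors
    return w - np.linalg.solve(JtJ + mu * np.eye(len(w)), g)

# Toy problem: fit y = w0*x + w1 to the line y = 2x + 1.
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 1.0

w = np.zeros(2)
mu = 0.01  # damping factor; real LM adapts this per iteration
for _ in range(20):
    e = (w[0] * x + w[1]) - y          # residuals
    J = np.stack([x, np.ones_like(x)], axis=1)  # exact Jacobian of e w.r.t. w
    w = lm_step(w, e, J, mu)

print(np.round(w, 3))  # converges to [2. 1.]
```

In a full implementation μ is increased when a step raises the error and decreased when it lowers it; the fixed μ here keeps the sketch minimal.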