Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS
Volume 14, Issue 2, Pages 274-281
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNN.2003.809401
Keywords
learning capability; neural-network modularity; storage capacity; two-hidden-layer feedforward networks (TLFNs)
Abstract
The problem of the necessary complexity of neural networks is of interest in applications. In this paper, the learning capability and storage capacity of feedforward neural networks are considered. We markedly improve recent results by logically introducing neural-network modularity. This paper rigorously proves, by a constructive method, that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (x_i, t_i) with arbitrarily small error, where m is the required number of output neurons. This implies that the number of hidden neurons required in feedforward networks can be decreased significantly compared with previous results. Conversely, a TLFN with Q hidden neurons can store at least Q²/(4(m+2)) distinct samples (x_i, t_i) with any desired precision.
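As a quick sanity check on the two bounds in the abstract, the following sketch computes the hidden-neuron count 2√((m+2)N) and the storage lower bound Q²/(4(m+2)), and confirms they are consistent (a network sized for N samples can store at least N samples). The function names are illustrative only, not from the paper.

```python
import math

def tlfn_hidden_neurons(N: int, m: int) -> int:
    """Hidden neurons sufficient for a TLFN to learn N distinct
    samples with m output neurons: 2*sqrt((m+2)*N), rounded up."""
    return math.ceil(2 * math.sqrt((m + 2) * N))

def tlfn_storage_capacity(Q: int, m: int) -> int:
    """Lower bound on the number of distinct samples a TLFN with Q
    hidden neurons can store: Q^2 / (4*(m+2)), rounded down."""
    return Q ** 2 // (4 * (m + 2))

# Example: N = 1000 samples, m = 10 outputs.
Q = tlfn_hidden_neurons(1000, 10)          # 2*sqrt(12*1000) ≈ 219.1 → 220
capacity = tlfn_storage_capacity(Q, 10)    # 220^2 / 48 → 1008
```

Note that Q = 220 is indeed much smaller than N = 1000, and the resulting storage bound (1008) covers the N samples, as the converse direction of the theorem requires.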