Journal
ANALYSIS AND APPLICATIONS
Volume 14, Issue 6, Pages 829-848
Publisher
WORLD SCIENTIFIC PUBL CO PTE LTD
DOI: 10.1142/S0219530516400042
Keywords
Deep and shallow networks; Gaussian networks; ReLU networks; blessed representation
Funding
- Center for Brains, Minds and Machines (CBMM)
- NSF STC award [CCF-1231216]
- ARO [W911NF-15-1-0385]
- McDermott chair
Abstract
The paper briefly reviews several recent results on hierarchical architectures for learning from examples, which may formally explain the conditions under which deep convolutional neural networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. It announces new results for a non-smooth activation function, the ReLU function used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.
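To make the abstract's terms concrete, here is a minimal sketch of the ReLU activation and of a compositional target function with a binary-tree hierarchy, the kind of structure a deep network can mirror layer by layer while a shallow network cannot. The specific function `h` and the 8-variable example are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), the non-smooth activation
    # discussed in the abstract.
    return np.maximum(0.0, x)

def h(a, b):
    # Hypothetical two-input constituent function built from ReLUs.
    return relu(a + b) - relu(a - b)

def f_compositional(x):
    # Binary-tree composition on 8 inputs:
    # f(x) = h(h(h(x1,x2), h(x3,x4)), h(h(x5,x6), h(x7,x8)))
    # Each level depends on only a few outputs of the level below,
    # which is the sparsity a deep architecture can exploit.
    l1 = [h(x[0], x[1]), h(x[2], x[3]), h(x[4], x[5]), h(x[6], x[7])]
    l2 = [h(l1[0], l1[1]), h(l1[2], l1[3])]
    return h(l2[0], l2[1])

x = np.arange(8, dtype=float)
y = f_compositional(x)
```

A deep network matching this tree needs only a number of units proportional to the number of constituent functions, whereas a generic shallow approximation of the same function can require complexity exponential in the input dimension.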