4.5 Article

Deep vs. shallow networks: An approximation theory perspective

Journal

Analysis and Applications
Volume 14, Issue 6, Pages 829-848

Publisher

World Scientific Publishing Co Pte Ltd
DOI: 10.1142/S0219530516400042

Keywords

Deep and shallow networks; Gaussian networks; ReLU networks; blessed representation

Funding

  1. Center for Brains, Minds and Machines (CBMM)
  2. NSF STC award [CCF-1231216]
  3. ARO [W911NF-15-1-0385]
  4. McDermott chair


The paper briefly reviews several recent results on hierarchical architectures for learning from examples, which may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.
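
To give a feel for the quantitative gap the abstract alludes to, the Python sketch below compares unit counts implied by standard degree-of-approximation rates: roughly eps^(-d/m) units for a shallow one-hidden-layer network approximating a generic d-variate target of smoothness m, versus roughly (d-1)*eps^(-2/m) units for a deep network whose graph matches a binary-tree compositional target built from bivariate constituents of the same smoothness. The function names, parameter choices, and the omission of all multiplicative constants are illustrative assumptions, not taken from the paper itself.

```python
import math

def shallow_units(eps, d, m):
    """Illustrative unit count for a one-hidden-layer network reaching
    uniform error eps on a generic d-variate target of smoothness m:
    N ~ eps^(-d/m) (constants omitted)."""
    return math.ceil(eps ** (-d / m))

def deep_units(eps, d, m):
    """Illustrative unit count for a deep network whose graph matches a
    binary-tree compositional target assembled from bivariate constituents,
    each of smoothness m: N ~ (d - 1) * eps^(-2/m) (constants omitted)."""
    return math.ceil((d - 1) * eps ** (-2 / m))

if __name__ == "__main__":
    m = 2        # assumed smoothness of the target / its constituents
    eps = 1e-2   # target uniform approximation accuracy
    for d in (4, 8, 16, 32):
        print(f"d={d:3d}  shallow ~{shallow_units(eps, d, m):.3e}  "
              f"deep ~{deep_units(eps, d, m):.3e}")
```

For d = 4 and m = 2 the shallow estimate is already two orders of magnitude larger than the deep one, and the gap grows exponentially in d; this is the sense in which a compositional ("blessed") representation lets deep networks escape the curse of dimensionality.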
