Article

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

Journal

NEURAL NETWORKS
Volume 108, Issue -, Pages 296-330

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2018.08.019

Keywords

Deep neural networks; Piecewise smooth functions; Function approximation; Sparse connectivity; Metric entropy; Curse of dimension

Funding

  1. European Commission, Project DEDALE within the H2020 Framework [665044]
  2. DFG Collaborative Research Center, Germany [TRR 109]

Abstract

We study the necessary and sufficient complexity of ReLU neural networks - in terms of depth and number of weights - which is required for approximating classifier functions in an $L^p$-sense. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different "smooth regions" of $f$ are separated by $C^\beta$ hypersurfaces. For given dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\epsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\epsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\epsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. For the proof of optimality, we establish a lower bound on the description complexity of the class $\mathcal{E}^\beta(\mathbb{R}^d)$. By showing that a family of approximating neural networks gives rise to an encoder for $\mathcal{E}^\beta(\mathbb{R}^d)$, we then prove that one cannot approximate a general function $f \in \mathcal{E}^\beta(\mathbb{R}^d)$ using neural networks that are less complex than those produced by our construction. In addition to the optimality in terms of the number of weights, we show that in order to achieve this optimal approximation rate, one needs ReLU networks of a certain minimal depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given - up to a multiplicative constant - by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$ defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension. (C) 2018 Elsevier Ltd. All rights reserved.
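The weight bound and the role of depth stated above can be made concrete with a small numerical experiment. The following Python sketch (assuming PyTorch is available) is not the paper's construction: it trains a generic, dense, fixed-depth ReLU network on a hand-picked piecewise smooth target on $[-1/2, 1/2]^2$ and prints the achieved empirical $L^2$ error next to the theoretical weight budget $\epsilon^{-2(d-1)/\beta}$ from the abstract. The target function, the choice of 4 layers of width 64, and all training settings are illustrative assumptions, and the dense parameter count is only a crude stand-in for the nonzero-weight count of the paper's sparse networks.

```python
# Minimal illustration (not the paper's construction): fit a fixed-depth ReLU
# network to a piecewise smooth function on [-1/2, 1/2]^2 and compare its
# parameter count with the theoretical budget eps^(-2(d-1)/beta).
# All architectural and training choices below are ad hoc assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, beta = 2, 2.0  # input dimension and assumed piecewise smoothness

def target(x):
    # Piecewise smooth model function: two smooth pieces separated by a
    # smooth interface (a circle of radius 0.3), discontinuous across it.
    r2 = (x ** 2).sum(dim=1)
    inside = (r2 < 0.3 ** 2).float()
    return inside * torch.sin(4 * x[:, 0]) + (1 - inside) * torch.cos(4 * x[:, 1])

# Fixed number of layers, independent of the accuracy (in the spirit of the
# paper); the concrete depth and width here are arbitrary demo choices.
net = nn.Sequential(
    nn.Linear(d, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    x = torch.rand(1024, d) - 0.5                      # uniform on [-1/2, 1/2]^d
    loss = ((net(x).squeeze(1) - target(x)) ** 2).mean()  # empirical L^2 loss
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    x = torch.rand(20000, d) - 0.5
    l2_err = ((net(x).squeeze(1) - target(x)) ** 2).mean().sqrt().item()

nonzero = sum(int((p != 0).sum()) for p in net.parameters())
budget = l2_err ** (-2 * (d - 1) / beta)  # eps^(-2(d-1)/beta), up to constants
print(f"L2 error ~ {l2_err:.3f}, nonzero weights = {nonzero}, "
      f"eps^(-2(d-1)/beta) ~ {budget:.1f}")
```

For $d = 2$ and $\beta = 2$ the budget scales like $\epsilon^{-1}$, so halving the target error should roughly double the number of nonzero weights admitted by the optimal construction; the trained dense network above will generally use far more parameters than that, which is consistent with the optimality being attained by a specifically constructed sparse architecture rather than by generic training.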
