Article

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

Journal

MATHEMATICS
Volume 7, Issue 10, Pages: -

Publisher

MDPI
DOI: 10.3390/math7100992

Keywords

Deep Neural Nets; ReLU Networks; Approximation Theory

Funding

  1. NSF [DMS-1855684, CCF-1934904]


This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: What is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0,1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width d + 1 can approximate any continuous convex function of d variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the d-dimensional cube [0,1]^d by ReLU nets with width d + 3.
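The observation that ReLU nets are well suited to representing convex functions has an elementary numerical illustration: every convex function is the supremum of its affine minorants, and a running maximum of affine pieces can be computed with ReLU operations alone via the identity max(a, b) = a + relu(b - a). The Python sketch below illustrates only this identity, not the paper's width-(d + 1) construction; the helper names convex_envelope and relu are ours. It approximates f(x) = x^2 on [0, 1] by the maximum of its tangent lines, with error shrinking as the tangent grid refines.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def convex_envelope(x, slopes, intercepts):
    # Running maximum of affine pieces using only ReLU operations:
    # max(a, b) = a + relu(b - a).
    out = slopes[0] * x + intercepts[0]
    for a, b in zip(slopes[1:], intercepts[1:]):
        out = out + relu(a * x + b - out)  # out <- max(out, a*x + b)
    return out

# Tangent lines of f(x) = x^2 at grid points t: y = 2t*x - t^2.
# Their pointwise maximum is a convex piecewise-linear under-approximation of f.
t = np.linspace(0.0, 1.0, 8)
slopes, intercepts = 2.0 * t, -t**2
x = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(convex_envelope(x, slopes, intercepts) - x**2))
print(f"max error with 8 tangents: {err:.4f}")  # about h^2/4 for grid spacing h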
