4.6 Article

Representation formulas and pointwise properties for Barron functions

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s00526-021-02156-6

Keywords

-

Funding

  1. iFlytek


This study provides a detailed investigation of the function space of infinitely wide two-layer neural networks with ReLU activation. Several representation formulas are established. Pointwise properties of these functions are analyzed, showing that certain types of functions cannot be represented by such networks. The study also exhibits Barron functions with unexpected properties, such as rapid decay at infinity, suggesting that two-layer neural networks may be able to approximate a greater variety of functions than commonly believed.
We study the natural function space for infinitely wide two-layer neural networks with ReLU activation (Barron space) and establish different representation formulae. In two cases, we describe the space explicitly up to isomorphism. Using a convenient representation, we study the pointwise properties of two-layer networks and show that functions whose singular set is fractal or curved (for example distance functions from smooth submanifolds) cannot be represented by infinitely wide two-layer networks with finite path-norm. We use this structure theorem to show that the only C^1-diffeomorphisms which preserve Barron space are affine. Furthermore, we show that every Barron function can be decomposed as the sum of a bounded and a positively one-homogeneous function and that there exist Barron functions which decay rapidly at infinity and are globally Lebesgue-integrable. This result suggests that two-layer neural networks may be able to approximate a greater variety of functions than commonly believed.
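
For orientation, a standard formulation of Barron space (the normalization may differ slightly from the one used in the paper) represents a function f on R^d as

    f(x) = \int_{\mathbb{R} \times \mathbb{R}^d \times \mathbb{R}} a \, \sigma(w \cdot x + b) \, d\mu(a, w, b), \qquad \sigma(t) = \max(t, 0),

for a suitable measure \mu over the parameters (a, w, b), with the associated Barron (path) norm

    \| f \|_{\mathcal{B}} = \inf_{\mu} \int |a| \, \big( |w| + |b| \big) \, d\mu(a, w, b),

where the infimum runs over all measures \mu that represent f. The results summarized above concern functions for which this norm is finite.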

