Article

Approximation of Lipschitz Functions Using Deep Spline Neural Networks

Journal

SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE
Volume 5, Issue 2, Pages 306-322

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/22M1504573

Keywords

deep learning; learnable activations; universality; robustness; Lipschitz continuity; linear splines

Abstract

Although Lipschitz-constrained neural networks have many applications in machine learning, the design and training of expressive Lipschitz-constrained networks is very challenging. Since the popular rectified linear-unit networks have provable disadvantages in this setting, we propose using learnable spline activation functions with at least three linear regions instead. We prove that our choice is universal among all componentwise 1-Lipschitz activation functions in the sense that no other weight-constrained architecture can approximate a larger class of functions. Additionally, our choice is at least as expressive as the recently introduced non-componentwise Groupsort activation function for spectral-norm-constrained weights. The theoretical findings of this paper are consistent with previously published numerical results.
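To make the ingredients of the abstract concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the authors' implementation): a learnable per-channel linear spline with three linear regions whose slopes are squashed into (-1, 1), so the activation is 1-Lipschitz, composed with spectral-norm-normalized linear layers. The class LearnableSpline3, its parametrization, and the helper lipschitz_mlp are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.parametrizations import spectral_norm


class LearnableSpline3(nn.Module):
    """Hypothetical per-channel linear spline with three linear regions.

    sigma(x) = a1*x + (a0 - a1)*min(x - t1, 0) + (a2 - a1)*max(x - t2, 0)
    has slope a0 on (-inf, t1], a1 on [t1, t2], and a2 on [t2, inf).
    Squashing each slope into (-1, 1) with tanh makes sigma 1-Lipschitz.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        # Initialize raw slopes at 1 so tanh gives ~0.76, i.e. a near-linear start.
        self.raw_slopes = nn.Parameter(torch.ones(num_channels, 3))
        self.t1 = nn.Parameter(torch.full((num_channels,), -1.0))
        # t2 = t1 + softplus(raw_gap) keeps the two breakpoints ordered.
        self.raw_gap = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a0, a1, a2 = torch.tanh(self.raw_slopes).unbind(dim=1)  # slopes in (-1, 1)
        t2 = self.t1 + F.softplus(self.raw_gap)
        return (a1 * x
                + (a0 - a1) * torch.clamp(x - self.t1, max=0.0)
                + (a2 - a1) * torch.clamp(x - t2, min=0.0))


def lipschitz_mlp(widths):
    """1-Lipschitz MLP: spectral-norm-normalized weights + 1-Lipschitz splines."""
    layers = []
    for i in range(len(widths) - 1):
        # spectral_norm divides the weight by a power-iteration estimate of its
        # largest singular value, so each linear map is approximately 1-Lipschitz
        # in the l2 norm (the bias does not affect the Lipschitz constant).
        layers.append(spectral_norm(nn.Linear(widths[i], widths[i + 1])))
        if i < len(widths) - 2:
            layers.append(LearnableSpline3(widths[i + 1]))
    return nn.Sequential(*layers)


net = lipschitz_mlp([2, 64, 64, 1])
y = net(torch.randn(8, 2))
```

Since a composition of 1-Lipschitz maps is 1-Lipschitz, the whole network is (approximately, up to the power-iteration estimate) 1-Lipschitz; the learnable breakpoints and slopes give the three-region flexibility the abstract refers to.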

