Article

Training neural networks for solving 1-D optimal piecewise linear approximation

Journal

NEUROCOMPUTING
Volume 508, Pages 275-283

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.07.025

Keywords

Deep learning; Neural networks; Interpretability; Piecewise linear models; Optimal approximation

In this work, the authors study the 1-D optimal piecewise linear approximation problem and propose a lattice neural network (LNN) method to solve it. They characterize the optimal solution and demonstrate the competitiveness of the LNN method through experiments.
Recently, the interpretability of deep learning has attracted a lot of attention. A plethora of methods have attempted to explain neural networks by feature visualization, saliency maps, model distillation, and so on. However, it is hard for these methods to reveal the intrinsic properties of neural networks. In this work, we studied the 1-D optimal piecewise linear approximation (PWLA) problem and associated it with a designed neural network, named the lattice neural network (LNN). We asked four essential questions: (1) What are the characteristics of the optimal solution of the PWLA problem? (2) Can an LNN converge to the global optimum? (3) Can an LNN converge to a local optimum? (4) Can an LNN solve the PWLA problem? Our main contributions are theorems characterizing the optimal solution of the PWLA problem and the LNN method for solving it. We evaluated the proposed LNNs on approximation tasks and developed an empirical method to improve their performance. The experiments verified that our LNN method is competitive with the state-of-the-art method. (c) 2022 Published by Elsevier B.V.
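The abstract does not spell out the LNN architecture, but a standard result it relies on is that any continuous 1-D piecewise linear function can be written in a min-max ("lattice") form, f(x) = max_j min_i (a_{j,i} x + b_{j,i}). The sketch below is an illustration of that representation with a simple subgradient-descent fit; the sizes, learning rate, and training loop are my own assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical demo, NOT the paper's LNN: fit a min-max piecewise linear
# model f(x) = max_j min_i (a[j, i] * x + b[j, i]) by subgradient descent.
rng = np.random.default_rng(0)

# Target: a simple piecewise linear function on [0, 1].
x = np.linspace(0.0, 1.0, 200)
y = np.abs(x - 0.5)

J, I = 4, 2                      # lattice sizes (groups x lines per group), chosen arbitrarily
a = rng.normal(size=(J, I))      # slopes
b = rng.normal(size=(J, I))      # intercepts

def predict(a, b, x):
    # planes[n, j, i] = a[j, i] * x[n] + b[j, i]; min over lines, max over groups
    planes = a[None, :, :] * x[:, None, None] + b[None, :, :]
    return planes.min(axis=2).max(axis=1)

initial_mse = float(np.mean((predict(a, b, x) - y) ** 2))

lr = 0.05
n = len(x)
for _ in range(3000):
    planes = a[None, :, :] * x[:, None, None] + b[None, :, :]
    mins = planes.min(axis=2)                                  # (N, J)
    j_star = mins.argmax(axis=1)                               # active group per sample
    i_star = planes.argmin(axis=2)[np.arange(n), j_star]       # active line in that group
    err = mins[np.arange(n), j_star] - y                       # residual of active plane
    # Subgradient of the squared loss flows only to the active (j*, i*) plane.
    ga = np.zeros_like(a)
    gb = np.zeros_like(b)
    np.add.at(ga, (j_star, i_star), err * x)
    np.add.at(gb, (j_star, i_star), err)
    a -= lr * ga / n
    b -= lr * gb / n

final_mse = float(np.mean((predict(a, b, x) - y) ** 2))
print(f"MSE: {initial_mse:.4f} -> {final_mse:.4f}")
```

The min-max form matters for the paper's framing: it ties a trainable network directly to the PWLA problem, so questions about global and local optima of the approximation translate into questions about the network's convergence.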
