Journal
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
Volume 119, Issue 12
Publisher
NATL ACAD SCIENCES
DOI: 10.1073/pnas.2107151119
Keywords
stability and accuracy; AI and deep learning; inverse problems; Smale's 18th problem; solvability complexity index hierarchy
Funding
- Research Fellowship at Trinity College, Cambridge University
- Leverhulme Prize
- Royal Society University Research Fellowship
Despite the existence of stable neural networks, current deep learning methods often suffer from instability. This study demonstrates well-conditioned problems in scientific computing where neural networks with good approximation qualities can be proven to exist, yet no algorithm, even a randomized one, can train or compute such networks.
Deep learning (DL) has had unprecedented success and is now entering scientific computing with full force. However, current DL methods typically suffer from instability, even when universal approximation properties guarantee the existence of stable neural networks (NNs). We address this paradox by demonstrating basic well-conditioned problems in scientific computing where one can prove the existence of NNs with great approximation qualities; however, there does not exist any algorithm, even randomized, that can train (or compute) such a NN. For any positive integers K > 2 and L, there are cases where simultaneously 1) no randomized training algorithm can compute a NN correct to K digits with probability greater than 1/2; 2) there exists a deterministic training algorithm that computes a NN with K - 1 correct digits, but any such (even randomized) algorithm needs arbitrarily many training data; and 3) there exists a deterministic training algorithm that computes a NN with K - 2 correct digits using no more than L training samples. These results imply a classification theory describing conditions under which (stable) NNs with a given accuracy can be computed by an algorithm. We begin this theory by establishing sufficient conditions for the existence of algorithms that compute stable NNs in inverse problems. We introduce fast iterative restarted networks (FIRENETs), which we both prove and numerically verify are stable. Moreover, we prove that only O(|log(ε)|) layers are needed for an ε-accurate solution to the inverse problem.
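The O(|log(ε)|) layer count has a simple intuition: if each unrolled layer of a restarted iterative scheme contracts the error by a fixed factor ρ < 1, then ρ^n ≤ ε requires only n ≈ log(ε)/log(ρ) layers. The sketch below illustrates this counting argument only; the contraction factor `rho` and the functions are illustrative assumptions, not the paper's FIRENET construction.

```python
import math

def layers_needed(eps, rho=0.5):
    """Smallest n with rho**n <= eps, i.e. n >= log(eps)/log(rho).

    Under a linear-contraction assumption (each layer shrinks the
    error by a factor rho), this grows like O(|log(eps)|).
    """
    return math.ceil(math.log(eps) / math.log(rho))

def error_after(n, rho=0.5):
    """Worst-case error after n contraction steps, starting from error 1."""
    return rho ** n

# The required depth scales logarithmically in the target accuracy.
for eps in (1e-2, 1e-4, 1e-8):
    n = layers_needed(eps)
    assert error_after(n) <= eps
    print(eps, n)
```

Doubling the number of correct digits only adds a constant multiple of layers, which is why depth stays modest even for high accuracy.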