Article

Training neural networks from an ergodic perspective

Journal

OPTIMIZATION
Volume -, Issue -, Pages -

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/02331934.2023.2239852

Keywords

Gradient descent; training neural network; global attractor; ergodic

Abstract

In this research, we view the weights of a neural network as points in a metric space and the training process as an iterated function system acting on that space. We find that the most effective training starts from initial weights very close to the minimum of the error, and we give an ergodic characterization of this efficient training. These findings suggest room for further advances in optimization through numerical experimentation and the study of dynamical systems theory.
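
To make the iterated-function-system viewpoint concrete, the sketch below (our illustration, not the paper's construction) treats each mini-batch gradient step w -> w - eta * grad E_i(w) as a map T_i on the weight space, so that training is a random composition of these maps. The quadratic toy losses E_i, the step size eta, and all names in the code are illustrative assumptions.

# A minimal sketch (illustrative, not the paper's construction): one
# gradient step T_i(w) = w - eta * grad E_i(w) is a map on weight space,
# and training is the iterated function system w_{k+1} = T_{i_k}(w_k).
import numpy as np

rng = np.random.default_rng(0)

# Toy per-batch losses E_i(w) = 0.5 * ||w - c_i||^2 (an assumption made
# here so gradients are simple; c_i is the minimizer for batch i).
centers = rng.normal(size=(4, 3))

def grad_E(i, w):
    return w - centers[i]           # gradient of the toy loss E_i

def T(i, w, eta=0.5):
    # One training step as a map on the metric space of weights.
    # For 0 < eta < 1 this map is a contraction with factor 1 - eta.
    return w - eta * grad_E(i, w)

w = rng.normal(size=3)              # initial weights
for _ in range(500):
    i = rng.integers(len(centers))  # choose a random batch, i.e. a map
    w = T(i, w)

# The random compositions settle onto the attractor of the system; for
# this toy loss the iterates fluctuate around the mean of the centers.
print(w)
print(centers.mean(axis=0))

Under standard contraction assumptions, such a system admits a unique stationary distribution supported on its attractor, which is the kind of ergodic structure the abstract alludes to; how close the initial weights start to the minimum then governs how quickly the iterates reach it.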
