Article

Teaching recurrent neural networks to infer global temporal structure from local examples

Journal

Nature Machine Intelligence
Volume 3, Issue 4, Pages 316-323

Publisher

Springer Nature
DOI: 10.1038/s42256-021-00321-2

Keywords

-

Funding

  1. John D. and Catherine T. MacArthur Foundation
  2. Alfred P. Sloan Foundation
  3. ISI Foundation
  4. Paul Allen Foundation
  5. Army Research Laboratory [W911NF-10-2-0022]
  6. Army Research Office [W911NF-14-1-0679, W911NF-16-1-0474, W911NF-17-2-0181]
  7. Office of Naval Research (ONR)
  8. National Institute of Mental Health [2-R01-DC-009209-11, R01-MH112847, R01-MH107235, R21-MH-106799]
  9. National Institute of Child Health and Human Development [1R01HD086888-01]
  10. National Institute of Neurological Disorders and Stroke [R01 NS099348]
  11. National Science Foundation (NSF) [DGE-1321851, NSF PHY-1554488, BCS-1631550]

Summary

Computational systems are engineered to store and manipulate information, whereas neurobiological systems adapt to perform similar functions without explicit engineering. This work shows that recurrent neural networks (RNNs) can learn to modify complex representations of information using only examples and a control signal, allowing continuous interpolation and extrapolation of these representations far beyond the training data. The trained RNNs can also infer bifurcation structures and period-doubling routes to chaos, and can extrapolate non-dynamical, kinematic trajectories.

Abstract

The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems adapt to perform analogous functions without needing to be explicitly engineered. Recent efforts have made progress in modelling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here, we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we drive an RNN with examples of translated, linearly transformed or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal that changes value for each example. When trained to replicate the Lorenz inputs, the network learns to autonomously evolve about a Lorenz-shaped manifold. Additionally, it learns to continuously interpolate and extrapolate the translation, transformation and bifurcation of this representation far beyond the training data by changing the control signal. Furthermore, we demonstrate that RNNs can infer the bifurcation structure of normal forms and period-doubling routes to chaos, and extrapolate non-dynamical, kinematic trajectories. Finally, we provide a mechanism for how these computations are learned, and replicate our main results using a Wilson-Cowan reservoir. Together, our results provide a simple but powerful mechanism by which an RNN can learn to manipulate internal representations of complex information, enabling the principled study and precise design of RNNs.
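
To make the setup concrete, below is a minimal sketch in Python of the kind of experiment the abstract describes: an echo-state-style reservoir driven open loop by translated Lorenz time series, each paired with a constant control value, with a ridge-regression readout trained to predict the next input sample; the loop is then closed at a control value outside the trained range. This is not the authors' code. The reservoir size, weight scales, control values and helper names (lorenz, drive) are illustrative assumptions, and whether extrapolation actually succeeds depends on such hyperparameter choices.

```python
import numpy as np

def lorenz(n_steps, dt=0.005, shift=0.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system; translate x by `shift`, then rescale."""
    traj = np.empty((n_steps, 3))
    s = np.array([1.0, 1.0, 1.05])
    for t in range(n_steps):
        dxdt = np.array([
            sigma * (s[1] - s[0]),
            s[0] * (rho - s[2]) - s[1],
            s[0] * s[1] - beta * s[2],
        ])
        s = s + dt * dxdt
        traj[t] = s
    traj[:, 0] += shift                # translated copy of the attractor
    return traj / 50.0                 # crude rescaling into tanh's range

rng = np.random.default_rng(0)
N = 800                                # reservoir size (illustrative)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius ~0.9
W_in = rng.uniform(-0.5, 0.5, (N, 3))             # input weights
w_c = rng.uniform(-0.5, 0.5, N)                   # control-signal weights

def drive(r, u, c):
    """One reservoir update driven by input u and scalar control c."""
    return np.tanh(W @ r + W_in @ u + w_c * c)

# Open-loop training: a few translated Lorenz examples, each paired with a
# distinct constant control value.
shifts, controls = [-10.0, 0.0, 10.0], [-1.0, 0.0, 1.0]
states, targets = [], []
for shift, c in zip(shifts, controls):
    u = lorenz(6000, shift=shift)
    r = np.zeros(N)
    R = np.empty((len(u), N))
    for t in range(len(u)):
        r = drive(r, u[t], c)
        R[t] = r
    states.append(R[500:-1])           # drop the transient; predict next input
    targets.append(u[501:])
R, Y = np.vstack(states), np.vstack(targets)

# Ridge-regression readout mapping reservoir state -> next input sample.
lam = 1e-6
W_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ Y).T

# Closed loop at an *extrapolated* control value: warm up on training input
# (shift 10.0 was paired with control 1.0), then feed the readout back as
# input with the new control.
r = np.zeros(N)
warm = lorenz(1000, shift=10.0)
for t in range(len(warm)):
    r = drive(r, warm[t], 1.0)
u = warm[-1]
c_test = 2.0                           # outside the trained range [-1, 1]
gen = np.empty((4000, 3))
for t in range(len(gen)):
    r = drive(r, u, c_test)
    u = W_out @ r
    gen[t] = u
print("mean x of generated trajectory:", 50.0 * gen[:, 0].mean())
```

Sweeping c_test continuously would then translate the generated attractor, which is the interpolation and extrapolation behaviour the abstract describes; the same scaffold extends to linear transformations and pre-bifurcation examples.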

Reviews

Primary Rating

4.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommended

No Data Available