Journal
NATURE COMMUNICATIONS
Volume 12, Issue 1
Publisher
NATURE PORTFOLIO
DOI: 10.1038/s41467-021-25801-2
Funding
- United States Air Force AFRL/SBRK [FA864921P0087]
- ARO [N68164-EG]
- DARPA
Reservoir computers are artificial neural networks that can be trained on small data sets, but require large random matrices and numerous metaparameters. Nonlinear vector autoregression, shown here to be equivalent to reservoir computing, matches or exceeds its performance on benchmark tasks while needing no random matrices, fewer metaparameters, less training data, and less training time.
Reservoir computers are artificial neural networks that can be trained on small data sets, but require large random matrices and numerous metaparameters. The authors propose an improved reservoir computer that overcomes these limitations and shows advantageous performance for complex forecasting tasks.

Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices, fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing.
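The abstract's core idea — replacing a random recurrent network with a nonlinear vector autoregression fit by linear (ridge) optimization — can be illustrated with a minimal sketch. This is not the authors' code: the feature construction (constant term, a few time-delayed inputs, and their quadratic monomials), the function names, and the logistic-map test problem are illustrative assumptions chosen because a quadratic map is exactly representable by quadratic features.

```python
import numpy as np

def nvar_features(X, k=2, s=1):
    """Build NVAR feature vectors from a trajectory X of shape (T, d):
    a constant, k time-delayed inputs spaced s steps apart, and all
    unique quadratic monomials of those delayed inputs.
    Returns an array of shape (T - (k-1)*s, n_features)."""
    T, d = X.shape
    rows = []
    for t in range((k - 1) * s, T):
        lin = np.concatenate([X[t - i * s] for i in range(k)])   # delayed linear part
        quad = np.outer(lin, lin)[np.triu_indices(lin.size)]     # unique quadratic terms
        rows.append(np.concatenate([[1.0], lin, quad]))          # constant + linear + quadratic
    return np.array(rows)

def fit_nvar(X, k=2, s=1, ridge=1e-6):
    """Ridge-regress the one-step increment X[t+1] - X[t] onto NVAR features.
    This is the linear optimization step; no random matrices are involved."""
    Phi = nvar_features(X[:-1], k, s)
    Y = X[(k - 1) * s + 1:] - X[(k - 1) * s:-1]                  # next-step increments
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ Y)

def forecast(W, X_init, n, k=2, s=1):
    """Autonomously forecast n steps from an initial history of at
    least (k-1)*s + 1 points, feeding predictions back as inputs."""
    X = list(X_init)
    for _ in range(n):
        hist = np.array(X[-((k - 1) * s + 1):])
        phi = nvar_features(hist, k, s)[-1]
        X.append(X[-1] + phi @ W)
    return np.array(X)
```

As a sanity check under these assumptions, training on a chaotic logistic-map trajectory (whose update rule is quadratic) lets the fitted model reproduce the next few steps almost exactly, since the true dynamics lie inside the feature space.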