Article

Feed-Forward learning algorithm for resistive memories

Journal

JOURNAL OF SYSTEMS ARCHITECTURE
Volume 131, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.sysarc.2022.102730

Keywords

Feed-Forward network; Memristor; Neural network; Resistive memory; Training algorithm

Abstract

This paper introduces a training algorithm for resistive memory systems that achieves accuracy similar to existing algorithms, but with faster training, by using additional memristors and a threshold gate.
Resistive memory systems, due to their inherent ability to perform Vector-Matrix Multiplication (VMM), have drawn the attention of researchers seeking to realize machine learning applications with low overheads. In resistive memory systems, each memory cell (synapse/neuron) stores a weight as a resistance/conductance value. Memristor-based resistive memory has been widely explored in this regard because of its small size and low power consumption. The inference quality of a neural network depends on how efficiently and accurately the weights are stored in the synapses. The weights are calculated using various training algorithms, such as back-propagation (BP), least mean square (LMS), and random weight change (RWC). The training accuracy of existing algorithms is directly related to the algorithm's complexity and the time devoted to training. This paper presents a training algorithm that requires an additional set of memristors and a threshold gate for training, and achieves accuracy similar to existing algorithms without any complex circuitry. The method can update synapse weights in parallel and requires fewer epochs to train an application. Experiments on standard benchmarks reveal that the method achieves an average speedup of 38x compared to state-of-the-art methods.
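The abstract's central premise, that a resistive crossbar performs VMM "inherently", follows from Ohm's and Kirchhoff's laws: with each weight stored as a conductance G[i][j] and input voltages V[i] applied to the rows, the current collected on column j is the dot product sum_i V[i]*G[i][j], computed in a single analog step. A minimal numerical sketch (not the paper's circuit; the array sizes and value ranges are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: emulate the VMM a memristor crossbar computes physically.
# Weights live as conductances G (siemens); inputs are row voltages V (volts).
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))  # synapse conductances
V = rng.uniform(0.0, 0.2, size=n_inputs)                 # applied row voltages

# The crossbar yields all column currents at once; numerically this is V @ G.
I = V @ G

# Equivalent explicit Kirchhoff current sum per column, for comparison.
I_loop = np.array([sum(V[i] * G[i, j] for i in range(n_inputs))
                   for j in range(n_outputs)])
assert np.allclose(I, I_loop)
```

The point of the emulation is that the per-column current sum requires no digital multiply-accumulate hardware, which is why crossbars promise low-overhead inference once the weights are trained into the conductances.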

