Article

Energy Efficient Learning With Low Resolution Stochastic Domain Wall Synapse for Deep Neural Networks

Journal

IEEE ACCESS
Volume 10, Pages 84946-84959

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3196688

Keywords

Domain wall; synapse; quantized weight; deep neural network; energy efficient; neuromorphic; in-memory computing

Funding

  1. National Science Foundation (NSF) [ECCS 1954589, CCF 1815033]
  2. Virginia Commonwealth Cyber Initiative (CCI) Cybersecurity Research Collaboration Grant

Abstract

We demonstrate that extremely low resolution quantized (nominally 5-state) synapses with large stochastic variations in synaptic weights can be energy efficient and achieve reasonably high testing accuracies compared to Deep Neural Networks (DNNs) of similar size that use floating-point precision synaptic weights. Specifically, voltage-controlled domain wall (DW) devices exhibit stochastic behavior and can encode only a limited number of states; however, they are extremely energy efficient during both training and inference. In this study, we propose both in-situ and ex-situ training algorithms, based on a modification of the algorithm proposed by Hubara et al. (2017) that works well with quantized synaptic weights, and train several 5-layer DNNs on the MNIST dataset using 2-, 3-, and 5-state DW devices as synapses. For in-situ training, a separate high-precision memory unit preserves and accumulates the weight gradients, which prevents accuracy loss due to weight quantization. For ex-situ training, a precursor DNN is first trained based on weight quantization and the DW device model. Moreover, a noise tolerance margin is included in both training methods to account for intrinsic device noise. The highest inference accuracies obtained after in-situ and ex-situ training are ~96.67% and ~96.63%, respectively, which is very close to the baseline accuracy of ~97.1% obtained from a DNN of similar topology with floating-point precision weights and no stochasticity. The large inter-state intervals that result from weight quantization, together with the noise tolerance margin, enable in-situ training with a significantly smaller number of programming attempts. Our proposed approach demonstrates the possibility of at least two orders of magnitude energy savings compared to a floating-point approach implemented in CMOS. It is particularly attractive for low-power intelligent edge devices, where ex-situ learning can be used for energy-efficient non-adaptive tasks and in-situ learning provides the opportunity to adapt and learn in a dynamically evolving environment.
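A minimal sketch of the training scheme described above, assuming a toy NumPy model: full-precision shadow weights accumulate the gradients (in the spirit of the Hubara et al.-style scheme the abstract cites), weights are quantized to a small number of device states, and devices are reprogrammed through a write-verify loop that stops once the read-back value falls within a noise tolerance margin. All function names, array shapes, and numeric values (noise level, margin, learning rate) are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch (not the authors' code): quantized-weight training with a
# full-precision gradient accumulator and a write-verify noise tolerance margin.
import numpy as np

N_STATES = 5            # nominally 5-state domain wall device
W_MAX = 1.0             # weights mapped to [-W_MAX, +W_MAX]
LEVELS = np.linspace(-W_MAX, W_MAX, N_STATES)
DEVICE_NOISE_STD = 0.05 # assumed stochastic conductance variation
TOLERANCE = 0.5 * (LEVELS[1] - LEVELS[0])  # assumed margin: half an inter-state interval

def quantize(w):
    """Snap full-precision weights to the nearest of the N_STATES device levels."""
    idx = np.abs(w[..., None] - LEVELS).argmin(axis=-1)
    return LEVELS[idx]

def read_device(device_w, rng):
    """Model a noisy read of the programmed device state."""
    return device_w + rng.normal(0.0, DEVICE_NOISE_STD, size=device_w.shape)

def program_devices(device_w, target_w, rng, max_attempts=10):
    """Write-verify loop: reprogram only the devices whose read-back value is
    outside the noise tolerance margin around the target quantized level."""
    attempts = 0
    for _ in range(max_attempts):
        readback = read_device(device_w, rng)
        off_target = np.abs(readback - target_w) > TOLERANCE
        if not off_target.any():
            break
        # reprogramming is itself stochastic in this toy model
        device_w[off_target] = target_w[off_target] + rng.normal(
            0.0, DEVICE_NOISE_STD, size=off_target.sum())
        attempts += 1
    return device_w, attempts

def train_step(shadow_w, device_w, x, grad_out, lr, rng):
    """One in-situ style update: accumulate the gradient in a high-precision
    shadow copy, re-quantize, and program the devices toward the new targets."""
    grad_w = x.T @ grad_out                     # gradient w.r.t. this layer's weights
    shadow_w -= lr * grad_w                     # high-precision accumulation
    np.clip(shadow_w, -W_MAX, W_MAX, out=shadow_w)
    target_w = quantize(shadow_w)               # low-resolution targets
    device_w, n_prog = program_devices(device_w, target_w, rng)
    return shadow_w, device_w, n_prog

# Toy usage: one 4x3 synaptic array with random data.
rng = np.random.default_rng(0)
shadow = rng.uniform(-W_MAX, W_MAX, (4, 3))
devices = quantize(shadow)
x = rng.standard_normal((8, 4))
grad_out = rng.standard_normal((8, 3))
shadow, devices, n_prog = train_step(shadow, devices, x, grad_out, lr=0.01, rng=rng)
print("programming attempts this step:", n_prog)

The design point this illustrates is that the high-precision copy lives in a conventional memory used only during training, while the DW devices themselves only ever hold one of a few quantized states; the wide inter-state spacing is what allows the write-verify loop to terminate after few programming attempts despite the device noise.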
