Article

OxRAM plus OTS optimization for binarized neural network hardware implementation

Journal

Semiconductor Science and Technology

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6641/ac31e2

Keywords

BNN; resistive RAM; OTS; chalcogenide; crossbar

Funding

  1. European Commission [ANR-19-P3IA-0003]
  2. French State and Auvergne-Rhône-Alpes region through ECSEL project ANDANTE

Abstract

This study aims to improve the speed and power consumption of deep learning accelerators by using low-power memristive devices, proposing a denser 1S1R crossbar system in place of the standard 1T1R architecture. The proposal is supported by experimental device characterization and neural network training simulations.
Low-power memristive devices embedded in the logic core of graphics or central processing units are a very promising non-von Neumann approach to significantly improve the speed and power consumption of deep learning accelerators, enhancing their deployment on embedded systems. Among various non-ideal emerging neuromorphic memory devices, synaptic weight hardware implementation using resistive random-access memories (RRAMs) within 1T1R architectures promises high performance on low-precision binarized neural networks (BNNs). Taking advantage of RRAM capabilities and of the density gain enabled by an ovonic threshold switching (OTS) selector, this work proposes to replace the standard 1T1R architecture with a denser 1S1R crossbar system, in which an HfO2-based resistive oxide memory (OxRAM) is co-integrated with a Ge-Se-Sb-N-based OTS. In this context, an extensive experimental study is performed to optimize the 1S1R stack and programming conditions for an extended read window margin and improved endurance. Focusing on the standard MNIST machine-learning image recognition task, we perform offline training simulations in order to define the constraints imposed on the devices during the training process. A very promising bit error rate of ~10⁻³ is demonstrated, together with error-free 1S1R programming endurance of 10⁴ cycles, fulfilling the requirements of the targeted application. Based on this simulation and experimental study, BNN figures of merit (system footprint, number of weight updates, accuracy, inference speed, electrical consumption per image classification and tolerance to errors) are optimized by engineering the number of learnable parameters of the system. Altogether, an inherent BNN resilience to 1S1R parasitic bit errors is demonstrated.
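
To make the read window margin discussion concrete, below is a minimal Monte Carlo sketch of how a read bit error rate can be estimated from the overlap of the LRS and HRS read-current distributions of a 1S1R cell. This is not the authors' methodology: the lognormal model, the median currents, the sigma values and the reference current i_ref are all illustrative assumptions.

```python
# Hypothetical sketch: estimate the read bit error rate (BER) of a 1S1R cell
# from assumed lognormal read-current distributions of the low- and
# high-resistance states (LRS/HRS). All parameters are illustrative,
# not measured values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed median read currents and lognormal spreads (sigma in log-space).
i_lrs = rng.lognormal(mean=np.log(50e-6), sigma=0.3, size=n)  # LRS ~ 50 uA
i_hrs = rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=n)   # HRS ~ 1 uA

# Read reference placed between the two distributions; cells whose current
# falls on the wrong side of it are read errors.
i_ref = np.sqrt(50e-6 * 1e-6)  # geometric mean of the two medians

# Average the two misread probabilities, assuming equally likely states.
ber = 0.5 * (np.mean(i_lrs < i_ref) + np.mean(i_hrs > i_ref))
print(f"estimated BER: {ber:.1e}")
```

Widening the gap between the two median currents or tightening the sigma values directly shrinks the distribution overlap, which is the intuition behind optimizing the stack and programming conditions for an extended read window margin.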
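
Likewise, a hedged sketch of the kind of offline BNN simulation described above: a sign-binarized linear layer trained with a straight-through estimator, with random sign flips injected at inference to emulate 1S1R read errors at a given bit error rate. The class names, network shape and `ber` parameter are hypothetical, not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): binarized weights with a
# straight-through estimator, plus inference-time sign flips emulating
# 1S1R read errors at a given bit error rate (the paper reports ~1e-3).
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Binarize weights to {-1, +1}; pass gradients straight through."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through estimator: zero gradients outside [-1, 1].
        return grad_out * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    """Linear layer whose weights are binarized on the forward pass."""
    def __init__(self, in_features, out_features, ber=0.0):
        super().__init__(in_features, out_features, bias=False)
        self.ber = ber  # bit error rate applied at inference only

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        if not self.training and self.ber > 0:
            # Flip each stored weight sign with probability `ber`,
            # emulating sporadic 1S1R read errors.
            flips = (torch.rand_like(w_bin) < self.ber).float()
            w_bin = w_bin * (1 - 2 * flips)
        return nn.functional.linear(x, w_bin)

# Usage: a small MNIST-style MLP; set ber=1e-3 to probe error resilience.
model = nn.Sequential(
    nn.Flatten(),
    BinaryLinear(784, 256, ber=1e-3), nn.BatchNorm1d(256), nn.Hardtanh(),
    BinaryLinear(256, 10, ber=1e-3),
)
```

Calling `model.eval()` enables the error injection, so the same trained network can be evaluated with and without emulated device errors to probe the BNN's tolerance to parasitic bit flips.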

