Article

Computing in Memory Using Doubled STT-MRAM With the Application of Binarized Neural Networks

Journal

IEEE MAGNETICS LETTERS
Volume 14

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LMAG.2023.3301384

Keywords

Spin electronics; computing in memory; spin-transfer torque magnetic random-access memory; binary/ternary content-addressable memory; resistive-based majority function; binary neural network

Abstract
The computing-in-memory (CiM) approach is a promising option for addressing the processor-memory data-transfer bottleneck in data-intensive applications. This letter presents a novel CiM architecture based on spin-transfer torque magnetic random-access memory (STT-MRAM) that operates in both computing and memory modes. Two spintronic devices are used per cell to store the main data and its complement, which addresses reliability concerns during the read operation and also enables reliable Boolean operations (all basic functions), binary/ternary content-addressable memory search, and a multi-input majority function. Because the architecture performs a bitwise XNOR operation in a single cycle, a resistive-based accumulator is designed to compute the multi-input majority, making the structure suitable for implementing fast, low-cost binary neural networks (BNNs): multiplication, accumulation, and passing through the activation function are completed in three cycles. Simulation of the architecture in a BNN application indicates an 86%-98% lower power-delay product than existing architectures.
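The abstract's three-cycle BNN pipeline (XNOR multiply, accumulation, majority activation) can be modeled functionally in software. The sketch below is a hypothetical behavioral model of what the CiM array computes, not the letter's circuit: inputs and weights are 0/1 bits, a match (XNOR = 1) stands in for a +1 product, and the majority function fires when more than half the products match.

```python
import numpy as np

def bnn_layer(x_bits, w_bits):
    """Behavioral model of one binarized layer, mirroring the three cycles
    described in the abstract (assumed mapping, for illustration only):
      cycle 1 - bitwise XNOR (binary multiplication)
      cycle 2 - accumulation (popcount of matches)
      cycle 3 - multi-input majority as the activation function
    x_bits: input vector of 0/1 bits, shape (n,)
    w_bits: weight matrix of 0/1 bits, shape (neurons, n)
    """
    # Cycle 1: XNOR of the input with each weight row (1 = match, 0 = mismatch)
    xnor = np.logical_not(np.logical_xor(x_bits, w_bits)).astype(int)
    # Cycle 2: accumulate matches per output neuron (popcount)
    popcount = xnor.sum(axis=-1)
    # Cycle 3: majority activation — output 1 iff more than half the bits match
    n = w_bits.shape[-1]
    return (popcount > n / 2).astype(int)
```

For example, with a 4-bit input and two weight rows, a row matching the input in 3 of 4 positions clears the majority threshold and outputs 1, while a row matching in 0 positions outputs 0.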
