Article

A CMOS-integrated compute-in-memory macro based on resistive random-access memory for AI edge devices

Journal

NATURE ELECTRONICS
Volume 4, Issue 1, Pages 81-90

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41928-020-00505-5


Funding

  1. NVM-DTP of TSMC
  2. NTHU
  3. MOST-Taiwan
  4. TSMC-NTHU JDP


Commercial complementary metal-oxide-semiconductor (CMOS) and resistive random-access memory (RRAM) technologies can be used to create multibit compute-in-memory circuits capable of fast and energy-efficient inference in small artificial intelligence edge devices.

The development of small, energy-efficient artificial intelligence edge devices is limited in conventional computing architectures by the need to transfer data between the processor and memory. Non-volatile compute-in-memory (nvCIM) architectures have the potential to overcome such issues, but the development of the high-bit-precision configurations required for dot-product operations remains challenging. In particular, input-output parallelism and cell-area limitations, as well as signal-margin degradation, computing latency in multibit analogue readout operations and manufacturing challenges, still need to be addressed. Here we report a 2 Mb nvCIM macro (which combines memory cells and related peripheral circuitry) based on single-level-cell resistive random-access memory devices and fabricated in a 22 nm CMOS foundry process. Compared with previous nvCIM schemes, our macro performs multibit dot-product operations with increased input-output parallelism, reduced cell-array area, improved accuracy, and reduced computing latency and energy consumption. In particular, the macro achieves latencies of 9.2 and 18.3 ns and energy efficiencies of 146.21 and 36.61 tera-operations per second per watt for binary and multibit input-weight-output configurations, respectively.
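The multibit dot-product operation described in the abstract is commonly realized in nvCIM macros by decomposing inputs and weights into binary bit-planes, accumulating binary partial products in the array, and recombining them with shift-and-add. The following sketch illustrates that decomposition only; the function `bitslice_dot` and its parameters are hypothetical and are not taken from the paper's circuit design.

```python
# Illustrative sketch (not the authors' circuit): a multibit dot product
# decomposed into binary partial products, as in bit-serial nvCIM schemes
# where weights occupy single-level cells and input bits are applied per cycle.

def bitslice_dot(inputs, weights, in_bits=4, w_bits=4):
    """Compute dot(inputs, weights) by summing shifted binary partial products."""
    acc = 0
    for i in range(in_bits):           # one input bit-plane per cycle
        for j in range(w_bits):        # one weight bit-column of the cell array
            # Binary partial dot product: what a single array readout would yield
            partial = sum(((x >> i) & 1) * ((w >> j) & 1)
                          for x, w in zip(inputs, weights))
            acc += partial << (i + j)  # shift-and-add recombines full precision
    return acc

# Matches the direct dot product for values within the stated bit widths
xs = [3, 5, 7, 2]
ws = [1, 4, 2, 6]
assert bitslice_dot(xs, ws) == sum(x * w for x, w in zip(xs, ws))
```

The nested loops make the latency trade-off visible: higher input and weight precision multiplies the number of binary array operations, which is why the reported latency and energy efficiency differ between the binary and multibit configurations.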

Authors

