Article

An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations

Journal

NATURE COMMUNICATIONS
Volume 12, Issue 1, Pages: -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41467-021-23719-3

Keywords

-

Funding

  1. National Key Research and Development Program [2016YFA0203900]
  2. Shanghai Municipal Science and Technology Commission [18JC1410300]
  3. Innovation Program of Shanghai Municipal Education Commission [2021-01-07-00-07-E00077]
  4. National Natural Science Foundation of China [61925402, 61851402, 62090032, 61874031]


Summary

The research proposes a circuit architecture that uses MoS2 transistors for efficient in-memory computing, enabling high-capacity MAC operations in a small area. By storing multi-level voltages on capacitors and performing analog computation, the architecture supports tasks such as image recognition.

Abstract

In-memory computing may enable multiply-accumulate (MAC) operations, which are the primary calculations used in artificial intelligence (AI). Performing MAC operations with high capacity in a small area with high energy efficiency remains a challenge. In this work, we propose a circuit architecture that integrates monolayer MoS2 transistors in a two-transistor-one-capacitor (2T-1C) configuration. In this structure, the memory portion is similar to 1T-1C dynamic random access memory (DRAM), so the cycling endurance and erase/write speed theoretically inherit the merits of DRAM. In addition, the ultralow leakage current of the MoS2 transistor enables the storage of multi-level voltages on the capacitor with a long retention time. The electrical characteristics of a single MoS2 transistor also allow analog computation by multiplying the drain voltage by the voltage stored on the capacitor. The sum-of-products is then obtained by converging the currents from multiple 2T-1C units. Based on our experimental results, a neural network is trained ex situ for image recognition with 90.3% accuracy. In the future, such 2T-1C units could be integrated into three-dimensional (3D) circuits with dense logic and memory layers for low-power in-situ training of neural networks in hardware.

In standard computing architectures, memory and logic circuits are separated, a feature that slows the matrix operations vital to deep learning algorithms. Here, the authors present an alternative in-memory architecture and demonstrate a feasible approach to analog matrix multiplication.
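To make the sum-of-products idea concrete, the following is a minimal numerical sketch, not the authors' circuit model: each 2T-1C unit is assumed to store a multi-level voltage on its capacitor (the weight) and to be driven by a drain voltage (the input), with the unit current approximated as proportional to the product of the two voltages and the column output obtained by summing currents on a shared line. The level count, full-scale voltage, and linear current model are all illustrative assumptions.

```python
import numpy as np

# Assumed, illustrative parameters (not from the paper).
LEVELS = 16            # number of storable voltage levels per capacitor
V_STORE_MAX = 1.0      # full-scale stored voltage (V)
G_UNIT = 1e-6          # proportionality constant of the unit current (A per V^2)

def quantize_weights(w):
    """Map ideal weights in [0, 1] onto discrete capacitor voltage levels."""
    levels = np.round(np.clip(w, 0.0, 1.0) * (LEVELS - 1)) / (LEVELS - 1)
    return levels * V_STORE_MAX

def mac_column(stored_v, drain_v):
    """Sum-of-products for one output line: converge the unit currents."""
    unit_currents = G_UNIT * stored_v * drain_v   # per-unit analog multiply
    return unit_currents.sum()                    # current summation on the shared line

# Example: one column of 8 units computing a dot product.
rng = np.random.default_rng(0)
weights = quantize_weights(rng.random(8))   # stored capacitor voltages
inputs = rng.random(8) * 0.2                # drain voltages (V)
print(f"output current:    {mac_column(weights, inputs):.3e} A")
print(f"ideal dot product: {np.dot(weights, inputs):.4f} V^2")
```

In this toy model the quantization step stands in for the multi-level storage enabled by the transistor's ultralow leakage, and the current summation stands in for converging the currents of multiple 2T-1C units; nonidealities such as retention loss and device-to-device variation are ignored.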
