Article

SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks

Journal

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3476999

Keywords

Chiplet architecture; in-memory compute; DNN acceleration; IMC benchmarking; network-on-chip; network-on-package

Funding

  1. C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program - DARPA
  2. C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program - SRC GRC Task [3012.001]


Abstract
In-memory computing (IMC) on a monolithic chip for deep learning faces severe challenges in area, yield, and on-chip interconnect cost due to ever-increasing model sizes. 2.5D integration, or chiplet-based architecture, interconnects multiple small chips (i.e., chiplets) to form a large computing system, presenting a feasible solution beyond a monolithic IMC architecture for accelerating large deep learning models. This paper presents a new benchmarking simulator, SIAM, to evaluate the performance of chiplet-based IMC architectures and explore the potential of such a paradigm shift in IMC architecture design. SIAM integrates device, circuit, architecture, network-on-chip (NoC), network-on-package (NoP), and DRAM access models to realize an end-to-end system. SIAM is scalable in its support of a wide range of deep neural networks (DNNs), customizable to various network structures and configurations, and capable of efficient design space exploration. We demonstrate the flexibility, scalability, and simulation speed of SIAM by benchmarking different state-of-the-art DNNs on the CIFAR-10, CIFAR-100, and ImageNet datasets. We further calibrate the simulation results against a published silicon result, SIMBA. The chiplet-based IMC architecture obtained through SIAM achieves 130x and 72x improvements in energy efficiency for ResNet-50 on the ImageNet dataset compared to the Nvidia V100 and T4 GPUs, respectively.
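To make the integration of component models concrete, the sketch below shows how a chiplet-level simulator of this kind might compose per-component cost models (IMC chiplets, NoC, NoP, DRAM access) into one end-to-end energy/latency estimate. This is purely illustrative: the function names, parameter values, and the serial-latency composition are assumptions for this example, not SIAM's actual API or calibrated numbers.

```python
# Illustrative sketch (not SIAM's actual API or numbers): composing
# per-component cost models -- IMC chiplets, NoC, NoP, DRAM -- into a
# single end-to-end estimate, as a chiplet-level simulator might.
from dataclasses import dataclass

@dataclass
class ComponentCost:
    energy_pj: float   # energy in picojoules
    latency_ns: float  # latency in nanoseconds

def imc_chiplet_cost(macs: int, energy_per_mac_pj: float = 0.5,
                     macs_per_ns: float = 100.0) -> ComponentCost:
    """Hypothetical cost of running `macs` multiply-accumulates on IMC chiplets."""
    return ComponentCost(macs * energy_per_mac_pj, macs / macs_per_ns)

def interconnect_cost(bits: int, energy_per_bit_pj: float,
                      bits_per_ns: float) -> ComponentCost:
    """Hypothetical cost of moving `bits` over NoC or NoP links."""
    return ComponentCost(bits * energy_per_bit_pj, bits / bits_per_ns)

def dram_cost(bytes_read: int, energy_per_byte_pj: float = 20.0,
              bytes_per_ns: float = 16.0) -> ComponentCost:
    """Hypothetical DRAM access cost for weight/activation fetches."""
    return ComponentCost(bytes_read * energy_per_byte_pj,
                         bytes_read / bytes_per_ns)

def end_to_end(components: list) -> ComponentCost:
    """Sum energies; latency is a simple serial sum (no overlap modeled)."""
    return ComponentCost(sum(c.energy_pj for c in components),
                         sum(c.latency_ns for c in components))

total = end_to_end([
    imc_chiplet_cost(macs=1_000_000),
    interconnect_cost(bits=8_000_000, energy_per_bit_pj=0.1, bits_per_ns=64.0),  # NoC
    interconnect_cost(bits=2_000_000, energy_per_bit_pj=1.0, bits_per_ns=32.0),  # NoP
    dram_cost(bytes_read=250_000),
])
print(f"total energy ~ {total.energy_pj / 1e6:.2f} uJ, "
      f"latency ~ {total.latency_ns / 1e3:.1f} us")
```

In a real simulator the component models are far richer (device non-idealities, router microarchitecture, DRAM timing), and latencies overlap rather than adding serially, but the overall pattern of aggregating independently calibrated component costs is the same.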

Authors

