Review

Analog architectures for neural network acceleration based on non-volatile memory

Journal

Applied Physics Reviews
Volume 7, Issue 3

Publisher

AIP Publishing
DOI: 10.1063/1.5143815

Funding

  1. Sandia's Laboratory Directed Research and Development (LDRD) program
  2. U.S. Department of Energy's National Nuclear Security Administration [DE-NA0003525]

Abstract

Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. Exploiting the intrinsic computational advantages of memory arrays has proven challenging, however, principally because of the overhead imposed by the peripheral circuitry and the non-ideal properties of the memory devices that play the role of the synapse. We review the existing implementations of these accelerators for deep supervised learning, organizing our discussion around the different levels of the accelerator design hierarchy, with an emphasis on circuits and architecture. We explore and consolidate the various approaches that have been proposed to address the critical challenges faced by analog accelerators, for both neural network inference and training, and highlight the key design trade-offs underlying these techniques.
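The operation at the heart of these accelerators is an in-place matrix-vector multiplication: weights are stored as device conductances, inputs are applied as voltages, and Ohm's law together with Kirchhoff's current law produces the weighted sums as output currents, with non-ideal devices perturbing the result. The following Python sketch illustrates this idea under simple assumptions; it is not taken from the paper, and the conductance range, the differential weight encoding, and the multiplicative write-noise model are illustrative choices only.

```python
import numpy as np

def program_conductances(weights, g_min=1e-6, g_max=1e-4,
                         noise_frac=0.05, rng=None):
    """Map a weight matrix onto NVM conductances using a differential pair
    per weight (positive and negative parts on separate devices), with
    multiplicative Gaussian write noise as a first-order non-ideality.
    All parameter values are illustrative assumptions, not device data
    from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    w_max = np.max(np.abs(weights))
    g_pos = g_min + (g_max - g_min) * np.clip(weights, 0, None) / w_max
    g_neg = g_min + (g_max - g_min) * np.clip(-weights, 0, None) / w_max
    g_pos *= 1 + noise_frac * rng.standard_normal(g_pos.shape)
    g_neg *= 1 + noise_frac * rng.standard_normal(g_neg.shape)
    return g_pos, g_neg

def crossbar_mvm(g_pos, g_neg, v_in):
    """Analog MVM: Ohm's law gives each device current G * V, and
    Kirchhoff's current law sums the currents along each output line;
    the differential pair yields a signed current per output."""
    return g_pos @ v_in - g_neg @ v_in

# Demo: compare the noisy analog result with the ideal digital product.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
g_pos, g_neg = program_conductances(W, rng=rng)
scale = (1e-4 - 1e-6) / np.max(np.abs(W))  # current-to-weight rescaling
print("analog:", crossbar_mvm(g_pos, g_neg, x) / scale)
print("ideal: ", W @ x)
```

Even this toy model exposes the trade-off the review emphasizes: tightening noise_frac or widening the conductance window improves numerical fidelity, but in real hardware those gains come at a cost in device engineering and in the precision (and hence area and energy) of the peripheral ADC/DAC circuitry.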
