Journal
IEEE JOURNAL OF SOLID-STATE CIRCUITS
Volume 53, Issue 4, Pages 983-994
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSSC.2017.2778702
Keywords
Binary neural networks; in-memory processing; near-memory processing; neural networks; reconfigurable array; ternary neural networks
Funding
- JST ACCEL, Japan [JPM-JAC1502]
- Grants-in-Aid for Scientific Research [25110015] Funding Source: KAKEN
A versatile reconfigurable accelerator architecture for binary/ternary deep neural networks is presented. In-memory neural-network processing without any external data accesses, enabled by the symmetry and simplicity of binary/ternary neural-network computation, dramatically improves energy efficiency. A prototype chip was fabricated; it achieves 1.4 TOPS (tera operations per second) peak performance with 0.6-W power consumption at a 400-MHz clock. Application examples are also examined.
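The core arithmetic trick behind such binary-network efficiency can be illustrated briefly. With weights and activations restricted to +1/-1, a dot product reduces to XNOR plus popcount, which is what makes in-memory multiply-accumulate so cheap. The sketch below (function and encoding names are illustrative, not from the paper) demonstrates the equivalence and also derives the headline efficiency implied by the reported figures:

```python
# Sketch: a +1/-1 dot product via XNOR + popcount.
# Encoding assumption: +1 -> bit 1, -1 -> bit 0.

def encode(vec):
    """Pack a +1/-1 vector into an integer bitmask (+1 -> 1, -1 -> 0)."""
    bits = 0
    for i, v in enumerate(vec):
        if v == +1:
            bits |= 1 << i
    return bits

def bin_dot(a_bits, b_bits, n):
    """Dot product of two n-element +1/-1 vectors via XNOR + popcount."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where elements match
    matches = bin(xnor).count("1")
    return 2 * matches - n  # matches minus mismatches

a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
assert bin_dot(encode(a), encode(b), len(a)) == sum(x * y for x, y in zip(a, b))

# Peak energy efficiency implied by the reported numbers: 1.4 TOPS at 0.6 W.
print(f"{1.4 / 0.6:.2f} TOPS/W")  # ~2.33 TOPS/W
```

Replacing multiplications with single-gate XNORs in this way is what allows the computation to sit directly next to (or inside) the memory arrays, avoiding the external data accesses mentioned in the abstract.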