Article

Training a Probabilistic Graphical Model With Resistive Switching Electronic Synapses

Journal

IEEE TRANSACTIONS ON ELECTRON DEVICES
Volume 63, Issue 12, Pages 5004-5011

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TED.2016.2616483

Keywords

Brain-inspired hardware; cognitive computing; neuromorphic computing; phase change memory (PCM); resistive memory

Funding

  1. SONIC, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA
  2. NSF Expedition in Computing, Visual Cortex on Silicon [1317470]
  3. Member companies of the Stanford Non-Volatile Memory Technology Research Initiative
  4. Stanford SystemX Alliance
  5. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1317407, 1317470]

Current large-scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy. New memory technologies, such as nanoscale two-terminal resistive switching memory devices, offer a compact, scalable, and low-power alternative that permits on-chip colocated processing and memory in a fine-grained distributed parallel architecture. Here, we report the first use of resistive memory devices for implementing and training a restricted Boltzmann machine (RBM), a generative probabilistic graphical model that is a key component of unsupervised learning in deep networks. We experimentally demonstrate a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements trained with a bioinspired variant of the contrastive divergence algorithm, implementing Hebbian and anti-Hebbian weight updates. The resistive PCM devices show a twofold to tenfold reduction in error rate on a missing-pixel pattern-completion task trained over 30 epochs, compared with the untrained case. Measured programming energy consumption is 6.1 nJ per epoch with the PCM devices, a factor of approximately 150 lower than in conventional processor-memory systems. We analyze and discuss the dependence of learning performance on cycle-to-cycle variations and the number of gradual levels in the PCM analog memory devices.
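The training scheme described in the abstract — contrastive divergence with a Hebbian data-phase update and an anti-Hebbian reconstruction-phase update, applied to synapses that can only take a limited number of conductance levels and that program with cycle-to-cycle noise — can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' exact bioinspired variant: the network sizes, number of levels (`n_levels`), noise magnitude (`sigma_c2c`), learning rate, and helper names are all assumptions chosen for clarity, and biases are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: a small RBM. (The paper's 45-synapse network maps each
# synapse to a differential pair of PCM devices, 90 devices in total; the
# sizes below are illustrative only.)
n_visible, n_hidden = 9, 5
n_levels = 16     # assumed number of gradual conductance levels per synapse
sigma_c2c = 0.05  # assumed cycle-to-cycle programming noise (fraction of lr)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synaptic weights quantized to discrete levels in [-1, 1].
levels = np.linspace(-1.0, 1.0, n_levels)
W = rng.choice(levels, size=(n_visible, n_hidden))

def cd1_update(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) step: Hebbian update from the
    data phase minus anti-Hebbian update from the reconstruction phase."""
    h0 = (sigmoid(v0 @ W) > rng.random(n_hidden)).astype(float)    # data phase
    v1 = (sigmoid(h0 @ W.T) > rng.random(n_visible)).astype(float) # reconstruction
    h1 = sigmoid(v1 @ W)
    grad = np.outer(v0, h0) - np.outer(v1, h1)
    # Program the devices: the update lands with cycle-to-cycle variation,
    # then the stored weight snaps to the nearest available conductance level.
    W_new = W + lr * grad + sigma_c2c * lr * rng.standard_normal(W.shape)
    idx = np.abs(W_new[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Train on a toy binary pattern for 30 epochs, as in the reported experiment.
pattern = rng.integers(0, 2, n_visible).astype(float)
for epoch in range(30):
    W = cd1_update(pattern, W)
```

Raising `n_levels` or lowering `sigma_c2c` in this sketch mimics the paper's analysis of how learning performance depends on the number of gradual levels and on cycle-to-cycle variation.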

