Article; Proceedings Paper

A 55-nm, 1.0-0.4V, 1.25-pJ/MAC Time-Domain Mixed-Signal Neuromorphic Accelerator With Stochastic Synapses for Reinforcement Learning in Autonomous Mobile Robots

Journal

IEEE Journal of Solid-State Circuits
Volume 54, Issue 1, Pages 75-87

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JSSC.2018.2881288

Keywords

Accelerator; autonomous robot; edge computing; low power; reinforcement learning (RL); stochastic synapse

Funding

  1. Semiconductor Research Corporation (SRC), through Extremely Energy Efficient Collective Electronics (EXCEL), an SRC-NRI Nanoelectronics Research Initiative [2698.002]
  2. Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC) [2777.005]

Abstract

Reinforcement learning (RL) is a bio-mimetic learning approach, where agents can learn about an environment by performing specific tasks without any human supervision. RL is inspired by behavioral psychology, where agents take actions to maximize a cumulative reward. In this paper, we present an RL neuromorphic accelerator capable of performing obstacle avoidance in a mobile robot at the edge of the cloud. We propose an energy-efficient time-domain mixed-signal (TD-MS) computational framework. In TD-MS computation, we demonstrate that the energy to compute is proportional to the importance of the computation. We leverage the unique properties of stochastic networks and recent advances in Q-learning in the proposed RL implementation. The 55-nm test chip implements RL using a three-layered fully connected neural network and consumes a peak power of 690 µW.
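The paper does not include code; the sketch below is only a minimal software illustration of the kind of Q-learning update the abstract refers to, using a small input-hidden-output fully connected network trained with a semi-gradient rule in NumPy. The state/action sizes, ReLU activation, learning rate, and epsilon-greedy policy are assumptions for illustration and do not represent the chip's time-domain mixed-signal or stochastic-synapse implementation.

```python
import numpy as np

# Illustrative semi-gradient Q-learning with a small fully connected network.
# Sizes and hyperparameters are assumptions, not taken from the paper.

rng = np.random.default_rng(0)

N_STATE, N_HIDDEN, N_ACTION = 8, 16, 4   # assumed sensor/action encoding
GAMMA, LR, EPSILON = 0.9, 0.01, 0.1

# Two weight matrices give an input-hidden-output (three-layer) network.
W1 = rng.normal(0.0, 0.1, (N_STATE, N_HIDDEN))
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_ACTION))

def q_values(state):
    """Forward pass: ReLU hidden layer, linear Q outputs."""
    h = np.maximum(0.0, state @ W1)
    return h, h @ W2

def select_action(state):
    """Epsilon-greedy action selection over the Q outputs."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTION))
    _, q = q_values(state)
    return int(np.argmax(q))

def q_learning_step(state, action, reward, next_state, done):
    """One Q-learning update: reduce (target - Q(s, a))^2 by gradient descent."""
    global W1, W2
    h, q = q_values(state)
    _, q_next = q_values(next_state)
    target = reward + (0.0 if done else GAMMA * np.max(q_next))
    td_error = target - q[action]

    # Backpropagate the TD error through the two weight matrices.
    grad_q = np.zeros(N_ACTION)
    grad_q[action] = -td_error            # d(loss)/d(q[action])
    grad_W2 = np.outer(h, grad_q)
    grad_h = W2 @ grad_q
    grad_h[h <= 0.0] = 0.0                # ReLU gradient mask
    grad_W1 = np.outer(state, grad_h)

    W1 -= LR * grad_W1
    W2 -= LR * grad_W2
    return td_error

# Example: one update on a placeholder transition. In a robot setting, the
# state vector would encode distance-sensor readings and the actions would
# map to motion primitives; here both are generic.
s = rng.random(N_STATE)
a = select_action(s)
s_next = rng.random(N_STATE)
print(q_learning_step(s, a, reward=-1.0, next_state=s_next, done=False))
```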
