Article

Synapse-Neuron-Aware Training Scheme of Defect-Tolerant Neural Networks with Defective Memristor Crossbars

Journal

MICROMACHINES
Volume 13, Issue 2

Publisher

MDPI
DOI: 10.3390/mi13020273

Keywords

synapse-neuron-aware training; defect-tolerant neural networks; defective memristor crossbars; memristor defects; neuromorphic

Funding

  1. [NRF-2015R1A5A7037615]
  2. [NRF-2019K1A3A1A25000279]
  3. [NRF-2021R1A2C1011631]
  4. [NRF-2021M3F3A2A01037972]
  5. [SRFC-TA1903-01]

This paper proposes a new crossbar training scheme that jointly optimizes the defect-map size and the neural network's performance. By dividing the memristor crossbar's columns into groups and combining synapse-aware and neuron-aware training, the proposed scheme improves network performance while minimizing the hardware burden.
To overcome the limitations of CMOS digital systems, emerging computing circuits such as memristor crossbars have been investigated as candidates for significantly increasing the speed and energy efficiency of next-generation computing systems, which are required for future AI hardware. Unfortunately, manufacturing yield remains a serious obstacle to adopting memristor-based computing systems because of immature fabrication technology. To compensate for neural-network malfunction caused by fabrication-related defects, this paper proposes a new crossbar training scheme that combines synapse-aware and neuron-aware training to optimize the defect-map size and the neural network's performance simultaneously. In the proposed scheme, the memristor crossbar's columns are divided into three groups: severely defective, moderately defective, and normal columns. Each group is trained according to the trade-off between the neural network's performance and the hardware overhead of defect-tolerant training. As a result of this group-based training combining neuron-aware and synapse-aware methods, the new scheme improves the network's performance beyond both the purely synapse-aware and the purely neuron-aware approaches while minimizing the hardware burden. For example, at a defect percentage of 10% on the MNIST dataset, the proposed scheme outperforms the synapse-aware and neuron-aware methods by 3.8% and 3.4% when the number of crossbar columns trained for synapse defects is 10 and 138 out of 310, respectively, while maintaining a smaller memory size than the synapse-aware method. When the number of trained columns is 138, the normalized memory size of the synapse-neuron-aware scheme is 3.1% smaller than that of the synapse-aware scheme.
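The column-grouping step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the defect map is assumed to be a boolean matrix marking defective memristor cells, and the two density thresholds used to separate severely defective, moderately defective, and normal columns are hypothetical values chosen only for the example.

```python
import numpy as np

def group_columns(defect_map, severe_thresh=0.2, moderate_thresh=0.05):
    """Classify crossbar columns by per-column defect density.

    defect_map: boolean array of shape (rows, cols); True marks a
    defective memristor cell. The thresholds are illustrative
    assumptions, not values taken from the paper.
    Returns index arrays (severe, moderate, normal).
    """
    # Fraction of defective cells in each column of the crossbar.
    defect_rate = defect_map.mean(axis=0)
    severe = np.where(defect_rate >= severe_thresh)[0]
    moderate = np.where((defect_rate >= moderate_thresh)
                        & (defect_rate < severe_thresh))[0]
    normal = np.where(defect_rate < moderate_thresh)[0]
    return severe, moderate, normal

# Toy 10x3 crossbar: column 0 has 3 defects (30%), column 1 has
# 1 defect (10%), column 2 is defect-free.
dm = np.zeros((10, 3), dtype=bool)
dm[:3, 0] = True
dm[0, 1] = True
severe, moderate, normal = group_columns(dm)
```

Each returned group would then be handed to its own training policy, reflecting the trade-off the abstract describes: defect-map entries (and retraining effort) are spent only on the columns whose defect density justifies them.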

