Article

Neuroevolution-Based Efficient Field Effect Transistor Compact Device Models

Journal

IEEE Access
Volume 9

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2021.3130254

Keywords

Metal oxide semiconductor (MOS); machine learning; neuroevolution; semiconductor device compact model

Funding

  1. Ministry of Science and Technology (MOST), Taiwan [MOST 110-2221-E-A49-143]
  2. Taiwan Semiconductor Research Institute (TSRI), Taiwan

Artificial neural networks such as multilayer perceptrons are effective for building semiconductor device models, but they require a large number of parameters and long simulation times, and optimizing the network architecture for better learning is important yet tedious. The neuroevolution method proposed here achieves lower RMSE and faster convergence for semiconductor device compact models than traditional MLP models.
Artificial neural networks (ANN) and multilayer perceptrons (MLP) have proved efficient for designing highly accurate semiconductor device compact models (CM). Their ability to update their weights and biases through backpropagation makes them highly effective at learning the task. To improve learning, an MLP usually requires a large network and thus a large number of model parameters, which significantly increases simulation time in circuit simulation. Optimizing the network architecture and topology is therefore a tedious yet important task. In this work, we tune the network topology using neuroevolution (NE) to develop semiconductor device CMs. With the input and output layers fixed, a genetic algorithm (GA), a gradient-free algorithm, tunes the network architecture, while Adam, a gradient-based backpropagation algorithm, optimizes the network weights and biases. In addition, we implemented MLP models with a similar number of parameters as baselines for comparison. In most cases, the NE models exhibit a lower root mean square error (RMSE) and require fewer training epochs than the MLP baseline models. For instance, with a patience of 100 and varying numbers of model parameters, the RMSEs on the test dataset in units of log(ampere) are 0.1461, 0.0985, 0.1274, 0.0971, and 0.0705 for NE versus 0.2254, 0.1423, 0.1429, 0.1425, and 0.1391 for MLP, for a foundry 28 nm technology node. The code is available on GitHub.
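As a rough illustration of the approach described in the abstract, the sketch below pairs a gradient-free genetic algorithm (searching over hidden-layer widths and depths) with gradient-based Adam training of each candidate's weights, scoring candidates by validation RMSE. This is not the authors' released code; the synthetic I-V-style data, population size, mutation scheme, and hyperparameters are illustrative assumptions.

```python
# Minimal neuroevolution sketch (illustrative, not the paper's code):
# a GA mutates MLP topologies while Adam fits each candidate's weights;
# fitness is validation RMSE on synthetic device-like data.
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

# Synthetic stand-in for device data: inputs (e.g., Vgs, Vds) -> log(Id).
X = torch.rand(512, 2)
y = torch.log(X[:, :1] * X[:, 1:] + 1e-3)
X_tr, y_tr, X_va, y_va = X[:400], y[:400], X[400:], y[400:]

def build_mlp(hidden):
    """Build an MLP with the given hidden-layer widths."""
    layers, d = [], 2
    for h in hidden:
        layers += [nn.Linear(d, h), nn.Tanh()]
        d = h
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

def fitness(hidden, epochs=200):
    """Train weights with Adam; return validation RMSE (lower is better)."""
    model = build_mlp(hidden)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_tr), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sqrt(loss_fn(model(X_va), y_va)).item()

def mutate(hidden):
    """Gradient-free topology change: resize, add, or drop a layer."""
    h = list(hidden)
    op = random.choice(["width", "add", "drop"])
    if op == "width":
        i = random.randrange(len(h))
        h[i] = max(2, h[i] + random.choice([-4, 4]))
    elif op == "add":
        h.insert(random.randrange(len(h) + 1), random.choice([4, 8, 16]))
    elif op == "drop" and len(h) > 1:
        h.pop(random.randrange(len(h)))
    return h

# GA loop: keep the best half of the population, refill by mutation.
population = [[random.choice([4, 8, 16])] for _ in range(6)]
for gen in range(5):
    scored = sorted((fitness(h), h) for h in population)
    print(f"gen {gen}: best RMSE={scored[0][0]:.4f} topology={scored[0][1]}")
    survivors = [h for _, h in scored[: len(scored) // 2]]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
```

The division of labor mirrors the setup the abstract describes: the GA only touches architecture, while all weight and bias learning stays with gradient-based backpropagation.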
