Article

Neuroevolution-Based Efficient Field Effect Transistor Compact Device Models

Journal

IEEE ACCESS
Volume 9, Issue -, Pages -

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3130254

Keywords

Metal oxide semiconductor (MOS); machine learning; neuroevolution; semiconductor device compact model

Funding

  1. Ministry of Science and Technology (MOST), Taiwan [MOST 110-2221-E-A49-143]
  2. Taiwan Semiconductor Research Institute (TSRI), Taiwan

Abstract

Artificial neural networks and multilayer perceptrons are efficient for designing semiconductor device models but require large numbers of parameters and long simulation times. Optimizing the network architecture for better learning is important yet tedious. The neuroevolution method achieves lower RMSE and faster convergence for semiconductor device compact models than traditional MLP models.
Artificial neural networks (ANN) and multilayer perceptrons (MLP) have proven efficient for designing highly accurate semiconductor device compact models (CM). Their ability to update their weights and biases through backpropagation makes them well suited to this learning task. To improve learning, an MLP usually requires a large network and thus a large number of model parameters, which significantly increases the simulation time in circuit simulation. Hence, optimizing the network architecture and topology is a tedious yet important task. In this work, we tune the network topology using neuroevolution (NE) to develop semiconductor device CMs. With the input and output layers defined, we allow a genetic algorithm (GA), a gradient-free algorithm, to tune the network architecture in combination with Adam, a gradient-based backpropagation algorithm, which optimizes the network weights and biases. In addition, we implement MLP models with similar numbers of parameters as baselines for comparison. We observe that in most cases the NE models exhibit a lower root mean square error (RMSE) and require fewer training epochs than the MLP baseline models. For instance, with a patience number of 100 and different numbers of model parameters, the test-set RMSEs in units of log(ampere) are 0.1461, 0.0985, 0.1274, 0.0971, and 0.0705 for NE versus 0.2254, 0.1423, 0.1429, 0.1425, and 0.1391 for MLP, for a 28-nm foundry technology node. The code is available on GitHub.
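The workflow the abstract describes (the authors' full implementation is on GitHub) can be illustrated with a minimal, hypothetical sketch: a GA mutates the hidden-layer sizes of candidate MLPs, each candidate's weights and biases are trained with Adam, and validation RMSE serves as the fitness. The sketch below uses PyTorch with synthetic stand-in data for the FET I-V characteristics; all names (build_mlp, fitness, mutate) and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Minimal neuroevolution sketch (not the authors' released code): a genetic
# algorithm searches over MLP hidden-layer sizes while Adam trains each
# candidate's weights; fitness is validation RMSE on log(I_D).
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

# Synthetic stand-in dataset: inputs (V_GS, V_DS) -> target log(I_D).
X = torch.rand(512, 2) * torch.tensor([1.0, 1.2])            # bias voltages
y = (-6.0 + 4.0 * torch.tanh(3.0 * X[:, 0]) * torch.tanh(2.0 * X[:, 1])).unsqueeze(1)
X_tr, y_tr, X_va, y_va = X[:400], y[:400], X[400:], y[400:]

def build_mlp(hidden_sizes):
    """Build an MLP from the genome's hidden-layer sizes (2 inputs, 1 output)."""
    layers, prev = [], 2
    for h in hidden_sizes:
        layers += [nn.Linear(prev, h), nn.Tanh()]
        prev = h
    layers.append(nn.Linear(prev, 1))
    return nn.Sequential(*layers)

def fitness(genome, epochs=60):
    """Train a candidate briefly with Adam; return validation RMSE (lower is better)."""
    model = build_mlp(genome)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_tr), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sqrt(loss_fn(model(X_va), y_va)).item()

def mutate(genome):
    """Gradient-free topology move: resize a layer, or add/remove one."""
    g = list(genome)
    op = random.choice(["resize", "add", "remove"])
    if op == "resize":
        i = random.randrange(len(g))
        g[i] = max(2, g[i] + random.choice([-4, 4]))
    elif op == "add":
        g.insert(random.randrange(len(g) + 1), random.choice([4, 8, 16]))
    elif len(g) > 1:
        g.pop(random.randrange(len(g)))
    return g

# Simple (mu + lambda)-style GA over topologies.
population = [[random.choice([4, 8, 16])] for _ in range(6)]
for gen in range(5):
    scored = sorted((fitness(g), g) for g in population)
    parents = [g for _, g in scored[:3]]                     # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(3)]
    print(f"gen {gen}: best RMSE {scored[0][0]:.4f} with layers {scored[0][1]}")
```

One simplification to note: the paper trains with early stopping governed by a patience number (e.g., 100 epochs without validation improvement), whereas the fixed 60-epoch inner loop above is an assumption kept short so the GA's outer loop stays cheap.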
