Article; Proceedings Paper

Training PPA Models for Embedded Memories on a Low-data Diet

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3556539

Keywords

Electronic design automation; machine learning; memory compilers; regression; artificial neural networks; transfer learning; few-shot learning; deep learning

Abstract

Supervised machine learning requires large amounts of labeled data for training. In power, performance, and area (PPA) estimation of embedded memories, every new memory compiler version is considered independently of previous compiler versions. Since the data of different memory compilers originate from similar domains, transfer learning may reduce the amount of supervised data required by pre-training PPA estimation neural networks on related domains. We show that provisioning times of PPA models for new compiler versions can be reduced significantly by exploiting similarities among different compilers, versions, and technology nodes. Through transfer learning, we shorten the time to provision PPA models for new compiler versions, which speeds up time-critical periods of the design cycle. Using only 901 training samples (10%) is sufficient to achieve an almost worst-case (98th percentile) estimation error of 2.67% and allows us to shorten model provisioning times from 40 days to less than one week without sacrificing accuracy. To enable a diverse set of source domains for transfer learning, we devise a new, application-independent method for overcoming structural domain differences through domain equalization that attains competitive results when compared to domain-free transfer. A high degree of automation necessitates the efficient assessment of the best source domains. We propose using various metrics to accurately identify four of the five best among 45 datasets with low computational effort.
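The abstract's core idea, pre-training a PPA regressor on an abundant related domain and then fine-tuning it on a few samples from a new compiler version, can be illustrated with a minimal sketch. The data generator, feature set (words, bits, mux factor), network size, and domain shift below are all illustrative assumptions, not the paper's actual setup or datasets:

```python
# Hedged sketch: pre-train a small neural regressor on a "source" compiler's
# PPA data, then fine-tune it on few samples from a "target" compiler.
# All data here is synthetic; the paper's compilers and features differ.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_ppa_data(n, offset):
    # Synthetic stand-in: features ~ (words, bits, mux); label ~ "area".
    # Related domains are modeled as differing only by a constant offset.
    X = rng.uniform(0.0, 1.0, size=(n, 3))
    y = X @ np.array([2.0, 1.5, 0.5]) + offset
    return X, y

X_src, y_src = make_ppa_data(2000, offset=0.0)  # abundant source-domain data
X_tgt, y_tgt = make_ppa_data(60, offset=0.3)    # few target-domain samples

# Pre-train on the source domain.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     warm_start=True, random_state=0)
model.fit(X_src, y_src)

# Fine-tune: with warm_start=True, fit() keeps the learned weights and
# continues training on the small target set instead of re-initializing.
model.set_params(max_iter=300)
model.fit(X_tgt, y_tgt)

X_test, y_test = make_ppa_data(200, offset=0.3)
err = np.mean(np.abs(model.predict(X_test) - y_test))
print(f"mean absolute error on target domain: {err:.3f}")
```

The same mechanism, initializing from a related domain rather than from scratch, is what lets the paper cut target-domain labeling from full datasets down to roughly 10% of the samples.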
