Article; Proceedings Paper

Training PPA Models for Embedded Memories on a Low-data Diet

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3556539

Keywords

Electronic design automation; machine learning; memory compilers; regression; artificial neural networks; transfer learning; few-shot learning; deep learning


Abstract

Supervised machine learning requires large amounts of labeled data for training. In power, performance, and area (PPA) estimation of embedded memories, every new memory compiler version is treated independently of previous compiler versions. Since the data of different memory compilers originate from similar domains, transfer learning can reduce the amount of supervised data required by pre-training PPA estimation neural networks on related domains. We show that provisioning times of PPA models for new compiler versions can be reduced significantly by exploiting similarities among different compilers, versions, and technology nodes. Through transfer learning, we shorten the time to provision PPA models for new compiler versions, which speeds up time-critical periods of the design cycle. Only 901 training samples (10% of the data) suffice to achieve an almost-worst-case (98th percentile) estimation error of 2.67%, shortening model provisioning times from 40 days to less than one week without sacrificing accuracy. To enable a diverse set of source domains for transfer learning, we devise a new, application-independent method, domain equalization, which overcomes structural domain differences and attains results competitive with domain-free transfer. Finally, because a high degree of automation requires efficient assessment of candidate source domains, we propose metrics that accurately identify four of the five best among 45 datasets with low computational effort.
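The abstract outlines a two-stage recipe: pre-train a PPA regressor on data from a related source domain (another compiler, version, or technology node), then fine-tune it on a small labeled sample from the new compiler version. The sketch below illustrates this flow in PyTorch on synthetic data; the network shape, the zero-padding used here to stand in for domain equalization, and all names (`equalize`, `PPARegressor`, `fit`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the transfer-learning recipe from the abstract:
# pre-train on a related source domain, fine-tune on few target samples.
# All names and the zero-padding "equalization" are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def equalize(x: np.ndarray, width: int) -> np.ndarray:
    """Zero-pad feature vectors so structurally different domains share one
    input width (a simple stand-in for the paper's domain equalization)."""
    pad = width - x.shape[1]
    return np.pad(x, ((0, 0), (0, pad))) if pad > 0 else x

class PPARegressor(nn.Module):
    """Small MLP mapping memory-configuration features to one PPA value."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

def fit(model, x, y, epochs=200, lr=1e-3):
    """Full-batch regression training with MSE loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    x_t = torch.as_tensor(x, dtype=torch.float32)
    y_t = torch.as_tensor(y, dtype=torch.float32).unsqueeze(1)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_t), y_t)
        loss.backward()
        opt.step()
    return model

# Toy data standing in for two compiler versions (source and target).
rng = np.random.default_rng(0)
width = 8                            # common input width after equalization
x_src = rng.uniform(size=(5000, 6))  # source domain: 6 raw features, many labels
y_src = x_src.sum(axis=1) + rng.normal(scale=0.01, size=5000)
x_tgt = rng.uniform(size=(900, 8))   # target domain: 8 raw features, few labels
y_tgt = x_tgt.sum(axis=1) + rng.normal(scale=0.01, size=900)

model = PPARegressor(width)
fit(model, equalize(x_src, width), y_src)        # pre-train on source domain
fit(model, equalize(x_tgt, width), y_tgt,        # fine-tune on the small
    epochs=100, lr=3e-4)                         # target sample

# Report an "almost worst-case" (98th percentile) relative error, the
# accuracy measure quoted in the abstract.
with torch.no_grad():
    x_eval = torch.as_tensor(equalize(x_tgt, width), dtype=torch.float32)
    pred = model(x_eval).squeeze(1).numpy()
rel_err = np.abs(pred - y_tgt) / np.abs(y_tgt)
print(f"98th-percentile relative error: {np.percentile(rel_err, 98):.2%}")
```

Fine-tuning here updates all weights with a reduced learning rate; freezing the early layers and retraining only the head is a common alternative when even fewer target samples are available.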
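For the source-selection step, the abstract proposes metrics that rank candidate source datasets at low computational cost, but does not name them here. The helper below (reusing `fit`, `PPARegressor`, and `equalize` from the sketch above) uses a generic proxy as an assumption: briefly pre-train on each candidate and score it by its error on a small probe set from the target domain.

```python
# Hypothetical proxy for source-domain selection (not the paper's metrics):
# rank each candidate source dataset by the probe-set error of a model
# briefly pre-trained on it, then keep the top-k candidates.
def rank_sources(sources, x_probe, y_probe, width, k=5):
    scores = []
    for name, (x_s, y_s) in sources.items():
        m = fit(PPARegressor(width), equalize(x_s, width), y_s, epochs=50)
        with torch.no_grad():
            p = m(torch.as_tensor(equalize(x_probe, width),
                                  dtype=torch.float32)).squeeze(1).numpy()
        scores.append((name, float(np.mean(np.abs(p - y_probe)))))
    return sorted(scores, key=lambda s: s[1])[:k]

# Example: pick the 5 most promising of 45 candidate datasets.
# sources = {"compiler_a_v1": (x_a, y_a), "compiler_b_v2": (x_b, y_b), ...}
# best = rank_sources(sources, x_tgt[:50], y_tgt[:50], width)
```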

