Proceedings Paper

Exploiting non-conventional DVFS on GPUs: application to Deep Learning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/SBAC-PAD49847.2020.00012

Keywords

GPU; DVFS; Undervoltage

Funding

  1. national funds through Fundacao para a Ciencia e a Tecnologia (FCT) [UIDB/CEC/50021/2020, PTDC/EEI-HAC/30485/2017, PCIF/MPG/0051/2018]
  2. Fundação para a Ciência e a Tecnologia [PCIF/MPG/0051/2018] Funding Source: FCT

Abstract

The use of Graphics Processing Units (GPUs) to accelerate Deep Neural Network (DNN) training and inference is already widely adopted, allowing for a significant increase in the performance of these applications. However, this performance gain comes at the cost of a corresponding increase in energy consumption. While several solutions have been proposed to perform Voltage-Frequency (V-F) scaling on GPUs, these remain one-dimensional, adjusting only the frequency while relying on the default voltage settings. To overcome this, this paper introduces a methodology to fully characterize the impact of non-conventional Dynamic Voltage and Frequency Scaling (DVFS) on GPUs. The proposed approach was applied to an AMD Vega 10 Frontier Edition GPU. When applying this non-conventional DVFS scheme to DNNs, the obtained results show that it is possible to safely decrease the GPU voltage, allowing for a significant reduction of the energy consumption (up to 38%) and of the Energy-Delay Product (EDP) (up to 41%) in the training of CNN models, with no degradation of the networks' accuracy.
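The Energy-Delay Product used in the abstract is simply the measured energy multiplied by the execution time, with energy obtained by integrating sampled power over the run. A minimal sketch of both metrics, with purely illustrative power values (not the paper's measurements, which come from the AMD Vega 10 platform):

```python
def energy_joules(power_samples_w, interval_s):
    """Integrate instantaneous power (W) sampled at a fixed interval (s)."""
    return sum(power_samples_w) * interval_s

def edp(energy_j, exec_time_s):
    """Energy-Delay Product: energy (J) multiplied by execution time (s)."""
    return energy_j * exec_time_s

# Illustrative comparison: a training run at the default voltage versus an
# undervolted run of the same duration (hypothetical numbers).
interval = 1.0                        # one power sample per second
default_power = [250.0] * 100         # W, default V-F settings
undervolted_power = [155.0] * 100     # W, reduced supply voltage

e_default = energy_joules(default_power, interval)
e_undervolt = energy_joules(undervolted_power, interval)
savings = 1.0 - e_undervolt / e_default   # fractional energy reduction
```

Because undervolting at a fixed frequency leaves the execution time unchanged, any energy saving translates directly into the same relative EDP saving for that run; the paper's larger EDP gains come from also exploring the frequency dimension.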
