Proceedings Paper

Evaluating the Energy Efficiency of Deep Convolutional Neural Networks on CPUs and GPUs

Publisher

IEEE
DOI: 10.1109/BDCloud-SocialCom-SustainCom.2016.76

Keywords

energy-efficiency; neural networks; deep learning; GPUs

Funding

  1. NSF [CNS-1216756, CCF-1452454, CNS-1305359]
  2. Nvidia Corporation
  3. NSF Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [1216756, 1305359]
  4. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1452454]


In recent years, convolutional neural networks (CNNs) have been successfully applied to various applications suited to deep learning, from image and video processing to speech recognition. Advancements in both hardware (e.g., more powerful GPUs) and software (e.g., deep learning models, open-source frameworks, and supporting libraries) have significantly improved the accuracy and training time of CNNs. However, this high speed and accuracy come at the cost of energy consumption, which has been largely ignored in previous CNN designs. As the size of data sets grows exponentially, the energy demand for training on them increases rapidly. It is therefore highly desirable to design deep learning frameworks and algorithms that are both accurate and energy efficient. In this paper, we conduct a comprehensive study of the power behavior and energy efficiency of numerous well-known CNNs and training frameworks on CPUs and GPUs, and we provide a detailed workload characterization to facilitate the design of energy-efficient deep learning solutions.
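The abstract's central quantity, energy consumed during training, is typically estimated by sampling device power at a fixed interval and integrating over time. The sketch below illustrates this idea generically; the function name and the sample values are illustrative assumptions, not the paper's actual measurement setup.

```python
# Generic illustration: energy as the time-integral of sampled power.
# Real measurements would come from a power meter or an API such as NVML;
# here the samples are hard-coded for demonstration.

def energy_joules(power_samples_w, interval_s):
    """Approximate energy (J) from evenly spaced power samples (W)
    using the trapezoidal rule."""
    if len(power_samples_w) < 2:
        return 0.0
    total = 0.0
    for p0, p1 in zip(power_samples_w, power_samples_w[1:]):
        total += 0.5 * (p0 + p1) * interval_s
    return total

# Example: a device drawing a steady ~150 W, sampled 10 times at 1 s intervals
samples = [150.0] * 10
print(energy_joules(samples, 1.0))  # 1350.0 J over the 9 elapsed intervals
```

Energy efficiency in this setting is then a ratio such as joules per processed image or per training iteration, which is what allows CPU and GPU configurations to be compared on equal footing.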
