4.2 Article

Achieving Autonomous Power Management Using Reinforcement Learning

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/2442087.2442095

Keywords

Design; Experimentation; Management; Performance; Power management; thermal management; machine learning; computer

Funding

  1. NSF [CNS-0845947]
  2. Division of Computer and Network Systems
  3. Directorate for Computer & Information Science & Engineering [1203986] Funding Source: National Science Foundation

Abstract

System-level power management must consider the uncertainty and variability that come from the environment, the application, and the hardware. A robust power management technique must be able to learn optimal decisions from past events and improve itself as the environment changes. This article presents a novel online power management technique based on model-free constrained reinforcement learning (Q-learning). The proposed learning algorithm requires no prior information about the workload and dynamically adapts to the environment to achieve autonomous power management. We focus on power management for peripheral devices and the microprocessor, two of the basic components of a computer. Because of their different operating behaviors and performance considerations, these two types of devices require different Q-learning agent designs. The article discusses system modeling and cost function construction for both types of Q-learning agents. Enhancement techniques are also proposed to speed up convergence and better maintain the required performance (or power) constraint in a dynamic system with large variations. Compared with existing machine-learning-based power management techniques, Q-learning-based power management adapts more flexibly to different workloads and hardware and provides a wider range of power-performance tradeoffs.
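
To make the mechanism concrete, the sketch below shows a minimal tabular Q-learning agent for a two-mode peripheral device, written in Python. It is an illustration only, not the article's implementation: the state encoding (power mode, workload level), the two-action set, the cost weights, and the class name QLearningPowerManager are all assumptions introduced here. The agent minimizes a weighted sum of normalized power and latency costs, mirroring the power-performance tradeoff described in the abstract.

    import random
    from collections import defaultdict

    ACTIONS = ["stay", "switch"]    # keep the current power mode, or toggle it

    class QLearningPowerManager:
        """Minimal tabular Q-learning agent for device power management.

        Illustrative sketch: the state encoding, device model, and cost
        weights are assumptions, not the article's exact design.
        """

        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, power_weight=0.5):
            self.q = defaultdict(float)   # Q[(state, action)] -> expected long-run cost
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.w = power_weight         # trades power cost against latency cost

        def choose(self, state):
            # Epsilon-greedy exploration; exploitation picks the lowest-cost action.
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return min(ACTIONS, key=lambda a: self.q[(state, a)])

        def cost(self, power, latency):
            # Immediate cost: weighted sum of normalized power and latency penalties.
            return self.w * power + (1.0 - self.w) * latency

        def update(self, state, action, power, latency, next_state):
            # Standard Q-learning update, with min (not max) because we minimize cost.
            target = self.cost(power, latency) + self.gamma * min(
                self.q[(next_state, a)] for a in ACTIONS)
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

    # Hypothetical single step: observe normalized power draw and request
    # latency for the chosen action, then update the Q-table.
    agent = QLearningPowerManager(power_weight=0.7)
    s = ("active", "low")             # (power mode, workload level)
    a = agent.choose(s)
    agent.update(s, a, power=0.8, latency=0.1, next_state=("active", "high"))

Framing the update as cost minimization makes the tradeoff explicit: sweeping power_weight between 0 and 1 moves the learned policy along the power-performance curve, which is one way to read the "wider range of power-performance tradeoff" claimed in the abstract. The constrained variant described in the article additionally adjusts this weighting online to hold a performance (or power) constraint; that adaptation is not shown here.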
