Article

Joint Physical-Layer and System-Level Power Management for Delay-Sensitive Wireless Communications

Journal

IEEE TRANSACTIONS ON MOBILE COMPUTING
Volume 12, Issue 4, Pages 694-709

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TMC.2012.36

Keywords

Energy-efficient wireless communications; dynamic power management; power-control; adaptive modulation and coding; Markov decision process; reinforcement learning

Funding

  1. Sanyo, Japan
  2. US National Science Foundation, Division of Computer and Network Systems, Directorate for Computer & Information Science & Engineering [0831549]

Abstract

We consider the problem of energy-efficient point-to-point transmission of delay-sensitive data (e.g., multimedia data) over a fading channel. Existing research on this topic utilizes either physical-layer centric solutions, namely power-control and adaptive modulation and coding (AMC), or system-level solutions based on dynamic power management (DPM); however, there is currently no rigorous and unified framework for simultaneously utilizing both physical-layer centric and system-level techniques to achieve the minimum possible energy consumption, under delay constraints, in the presence of stochastic and a priori unknown traffic and channel conditions. In this paper, we propose such a framework. We formulate the stochastic optimization problem as a Markov decision process (MDP) and solve it online using reinforcement learning (RL). The advantages of the proposed online method are that 1) it does not require a priori knowledge of the traffic arrival and channel statistics to determine the jointly optimal power-control, AMC, and DPM policies; 2) it exploits partial information about the system so that less information needs to be learned than when using conventional reinforcement learning algorithms; and 3) it obviates the need for action exploration, which severely limits the adaptation speed and runtime performance of conventional reinforcement learning algorithms. Our results show that the proposed learning algorithms can converge up to two orders of magnitude faster than a state-of-the-art learning algorithm for physical-layer power-control and up to three orders of magnitude faster than conventional reinforcement learning algorithms.
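To make the abstract's setup concrete, below is a minimal, self-contained Python sketch of the general idea: an agent learns, online, a joint transmit/sleep policy over a buffer-and-channel MDP, trading transmission energy against queueing delay. Everything here is an illustrative assumption rather than the paper's model: the two-state i.i.d. channel, the arrival probability, the action set (sleep, low power, high power), and the cost weights are invented for the toy, and plain tabular Q-learning with epsilon-greedy exploration stands in for the paper's algorithm, which instead exploits partial model knowledge and avoids action exploration entirely.

```python
import random

# Toy model (illustrative assumptions, not the paper's system model):
# - Buffer holds 0..B packets; one packet arrives per slot with prob. P_ARR.
# - Channel is "good" or "bad", drawn i.i.d. each slot for simplicity.
# - Actions: 0 = sleep (DPM off state, no transmission, small idle power),
#            1 = transmit at low power, 2 = transmit at high power.
# - Per-slot cost = energy used + holding cost per buffered packet (delay proxy).

B = 10                              # buffer capacity
P_ARR = 0.4                         # packet arrival probability
ENERGY = {0: 0.1, 1: 1.0, 2: 2.0}   # energy cost of each action
HOLD_COST = 0.5                     # per-packet delay penalty
GAMMA = 0.95                        # discount factor
ALPHA = 0.05                        # learning rate
EPS = 0.1                           # epsilon-greedy exploration rate

# Transmission success probability per (channel, action) -- a crude stand-in
# for the rate/reliability achieved by power-control and AMC.
P_SUCC = {('good', 1): 0.9, ('good', 2): 0.99,
          ('bad', 1): 0.4, ('bad', 2): 0.8}

def step(buf, chan, a):
    """Simulate one slot; return (cost, next_buffer, next_channel)."""
    sent = 0
    if a > 0 and buf > 0 and random.random() < P_SUCC[(chan, a)]:
        sent = 1
    arrival = 1 if random.random() < P_ARR else 0
    next_buf = min(B, buf - sent + arrival)
    cost = ENERGY[a] + HOLD_COST * buf
    next_chan = 'good' if random.random() < 0.6 else 'bad'
    return cost, next_buf, next_chan

# Tabular Q-learning over (buffer, channel) states and 3 actions.
Q = {(b, c): [0.0, 0.0, 0.0] for b in range(B + 1) for c in ('good', 'bad')}

buf, chan = 0, 'good'
for t in range(200_000):
    s = (buf, chan)
    if random.random() < EPS:
        a = random.randrange(3)                          # explore
    else:
        a = min(range(3), key=lambda x: Q[s][x])         # exploit (min cost)
    cost, buf, chan = step(buf, chan, a)
    # Standard Q-update toward the observed cost-to-go; we minimize cost,
    # hence the min over next-state action values.
    Q[s][a] += ALPHA * (cost + GAMMA * min(Q[(buf, chan)]) - Q[s][a])

# Inspect the learned policy: the cheapest action in a few sample states.
for b in (0, 2, 5, 10):
    for c in ('good', 'bad'):
        best = min(range(3), key=lambda x: Q[(b, c)][x])
        print(f"buffer={b:2d} channel={c:4s} -> action {best}")
```

Run end to end, the learned policy typically sleeps when the buffer is empty and escalates transmit power as the buffer fills or the channel improves, which mirrors the qualitative behavior the paper targets. The abstract's claimed speedups come precisely from replacing this kind of blind, exploration-based Q-learning with updates that exploit known structure of the system dynamics.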
