Article

Dynamic Content Update for Wireless Edge Caching via Deep Reinforcement Learning

Journal

IEEE COMMUNICATIONS LETTERS
Volume 23, Issue 10, Pages 1773-1777

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LCOMM.2019.2931688

Keywords

Content update; Markov decision process; deep reinforcement learning; cache hit rate; long-term reward

Funding

  1. National Key R&D Program [2018YFB1004800]
  2. National Natural Science Foundation of China [61872184, 61727802]
  3. Singapore Ministry of Education Academic Research Fund Tier 2 [MOE2016-T2-2-054]
  4. SUTD-ZJU Grant [ZJURP1500102]

Abstract

This letter studies a basic wireless caching network in which a source server is connected to a cache-enabled base station (BS) that serves multiple requesting users. A critical problem is how to improve the cache hit rate under dynamic content popularity. To solve this problem, the primary contribution of this letter is a novel dynamic content update strategy aided by deep reinforcement learning. Since the BS is unaware of content popularities, the proposed strategy dynamically updates the BS cache according to the time-varying user requests and the contents currently cached at the BS. Toward this end, we model the cache update problem as a Markov decision process and put forth an efficient algorithm that builds upon a long short-term memory (LSTM) network and external memory to enhance the decision-making ability of the BS. Simulation results show that the proposed algorithm achieves not only a higher average reward than the deep Q-network (DQN) but also a higher cache hit rate than existing replacement policies such as the least-recently-used (LRU), first-in first-out (FIFO), and DQN-based algorithms.
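The letter itself gives the full MDP formulation; purely as an illustration of the setup the abstract describes, the toy sketch below couples the current request and the BS cache into the state, lets the action replace one cached item (or leave the cache untouched), and pays a per-step reward for a cache hit under a hidden, drifting popularity profile. All class names, parameters, and the drift model are assumptions made for this sketch, not the authors' environment.

```python
import random


class CacheUpdateEnv:
    """Toy cache-update MDP loosely mirroring the letter's setup:
    a BS caches C of N contents, content popularity is hidden from
    the agent and drifts over time, and the per-step reward is a
    cache hit. All names and parameters here are illustrative."""

    def __init__(self, n_contents=50, cache_size=5, drift=0.02, seed=0):
        self.rng = random.Random(seed)
        self.n, self.c, self.drift = n_contents, cache_size, drift
        self.popularity = [self.rng.random() for _ in range(self.n)]
        self.cache = list(range(self.c))        # arbitrary initial cache
        self.request = self._sample_request()   # first observed request

    def _sample_request(self):
        # Draw a request from the hidden (unnormalized) popularity profile.
        r, acc = self.rng.random() * sum(self.popularity), 0.0
        for item, p in enumerate(self.popularity):
            acc += p
            if r <= acc:
                return item
        return self.n - 1

    def step(self, evict_slot):
        """evict_slot: cache index to overwrite with the missed request,
        or None to leave the cache unchanged (the two kinds of update
        decision the abstract's strategy chooses between)."""
        if evict_slot is not None and self.request not in self.cache:
            self.cache[evict_slot] = self.request
        # Popularity drifts, so a frozen cache slowly goes stale.
        for i in range(self.n):
            self.popularity[i] = max(
                1e-6, self.popularity[i] + self.rng.uniform(-self.drift, self.drift))
        self.request = self._sample_request()
        hit = self.request in self.cache
        state = (self.request, tuple(self.cache))  # what the agent observes
        return state, float(hit)


# Random-eviction baseline for reference.
env, hits, steps = CacheUpdateEnv(), 0.0, 10_000
for _ in range(steps):
    evict = None if env.request in env.cache else env.rng.randrange(env.c)
    _, reward = env.step(evict)
    hits += reward
print(f"random-eviction hit rate: {hits / steps:.3f}")
```

A learning agent, whether a plain DQN or the letter's LSTM-plus-external-memory policy, would replace the random eviction rule in the driver loop, choosing `evict_slot` from the observed state so as to maximize the long-term hit rate.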
