Article

PA-Cache: Evolving Learning-Based Popularity-Aware Content Caching in Edge Networks

Journal

IEEE Transactions on Network and Service Management

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNSM.2021.3053645

Keywords

Heuristic algorithms; Predictive models; Prediction algorithms; Databases; Feature extraction; Computational modeling; Training; Edge caching; popularity prediction; deep learning; quality of service

Funding

  1. Major Special Program for Technical Innovation & Application Development of Chongqing Science & Technology Commission [CSTC 2019jscx-zdztzxX0031]
  2. National NSFC [61902044, 62072060]
  3. Fundamental Research Funds for the Central Universities [2020CDJQY-A001, 2020CDJQY-A022, 2018CDXYRJ0030]
  4. National Key R&D Program of China [2018YFF0214700, 2018YFB2100100]
  5. Chongqing Research Program of Basic Research and Frontier Technology [cstc2019jcyj-msxmX0589, cstc2018jcyjAX0340]
  6. Key Research Program of Chongqing Science & Technology Commission [CSTC2017jcyjBX0025, CSTC2019jscx-zdztzxX0031]
  7. Double First-Class Scientific Research Funds of HIT [IDGA1010200107]

Abstract

As the traffic generated by mobile devices continues to increase, the importance of content caching at network edges grows. This study introduces a learning-based evolving content caching policy, PA-Cache, which adaptively learns content popularity and trains neural networks gradually, outperforming existing caching algorithms and reducing computational costs.
As ubiquitous and personalized services continue to boom, massive numbers of mobile devices generate an increasingly large amount of traffic over the network. As a result, content caching is gradually extending to network edges to provide low-latency services, improve quality of service, and reduce redundant data traffic. Compared to conventional content delivery networks, caches in edge networks are smaller and usually have to accommodate more bursty requests. In this article, we propose an evolving learning-based content caching policy for edge networks, named PA-Cache. It adaptively learns time-varying content popularity and determines which contents should be replaced when the cache is full. Unlike conventional deep neural networks (DNNs), which learn a fine-tuned but possibly outdated or biased prediction model over the entire training dataset at high computational complexity, PA-Cache weighs a large set of content features and trains a multi-layer recurrent neural network from shallow to deep as more requests arrive over time. We extensively evaluate the performance of PA-Cache on real-world traces from a large online video-on-demand service provider. The results show that PA-Cache outperforms existing popular caching algorithms and approximates the optimal algorithm with only a 3.8% performance gap when the cache percentage is 1.0%. PA-Cache also significantly reduces the computational cost compared to conventional DNN-based approaches.
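
A minimal sketch of the eviction idea described above, for illustration only: the cache admits every requested item and, when full, evicts the cached content whose predicted future popularity is lowest. The paper learns popularity with a multi-layer recurrent network trained on content features; the PopularityPredictor below substitutes a simple exponentially decayed request counter, so the class names, the decay scheme, and the admit-everything rule are assumptions made for this sketch rather than the authors' implementation.

```python
# Hypothetical sketch of a popularity-aware cache in the spirit of PA-Cache.
# The learned RNN predictor from the paper is replaced by a simple
# exponentially decayed request counter for self-containment.
import time
from collections import defaultdict


class PopularityPredictor:
    """Stand-in for the learned popularity model (a recurrent network in the paper)."""

    def __init__(self, half_life_s: float = 3600.0):
        self.half_life_s = half_life_s
        self.score = defaultdict(float)   # decayed request count per content
        self.last_seen = {}               # timestamp of the last request per content

    def observe(self, content_id: str, now: float) -> None:
        # Decay the old score toward zero, then count the new request.
        prev = self.last_seen.get(content_id, now)
        decay = 0.5 ** ((now - prev) / self.half_life_s)
        self.score[content_id] = self.score[content_id] * decay + 1.0
        self.last_seen[content_id] = now

    def predict(self, content_id: str, now: float) -> float:
        # Predicted future popularity = current score decayed to "now".
        prev = self.last_seen.get(content_id, now)
        decay = 0.5 ** ((now - prev) / self.half_life_s)
        return self.score[content_id] * decay


class PACacheSketch:
    """On a miss with a full cache, evict the item with the lowest predicted popularity."""

    def __init__(self, capacity: int, predictor: PopularityPredictor):
        self.capacity = capacity
        self.predictor = predictor
        self.store = {}

    def request(self, content_id: str, payload=None) -> bool:
        now = time.time()
        self.predictor.observe(content_id, now)
        if content_id in self.store:
            return True   # cache hit
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda c: self.predictor.predict(c, now))
            del self.store[victim]
        self.store[content_id] = payload
        return False      # cache miss; content fetched and admitted


if __name__ == "__main__":
    cache = PACacheSketch(capacity=2, predictor=PopularityPredictor())
    trace = ["a", "b", "a", "c", "a", "b"]
    hits = sum(cache.request(cid) for cid in trace)
    print(f"hit ratio: {hits / len(trace):.2f}")
```

In the paper, the predictor is a recurrent network grown from shallow to deeper layers as requests accumulate; swapping such a model into PopularityPredictor.predict would leave the eviction logic above unchanged.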
