Article

Coded caching design for fog-aided networks

Journal

COMPUTER NETWORKS
Volume 196, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.comnet.2021.108237

Keywords

Fog computing; Coded caching; Centralized caching; Decentralized caching; Parallel delivery; Successive delivery

Funding

  1. Key Industry Innovation Chain Project of Shaanxi [2020ZDLGY05-04, 2021ZDLGY05-03]
  2. National Natural Science Foundation of China [61671340]
  3. 111 Project [B08038]
  4. Collaborative Innovation Center of Information Sensing and Understanding at Xidian University


This study introduces novel cache placement and delivery schemes for fog-aided networks, using coding techniques to reduce the transmission load and alleviate network congestion. The impact of cache memories on transmission time is analyzed, showing that the centralized scheme outperforms the decentralized one for both parallel and successive delivery. Increasing the memories of relays and users decreases the transmission load, and when the total memory can store the file library, the server is no longer needed in the delivery phase.
Coded caching for fog-aided networks is a promising technology for next-generation wireless networks. We study the cache placement and delivery problems for fog networks, which can be modeled as two-hop networks. A novel centralized caching scheme and a novel decentralized caching scheme are proposed for two-hop networks consisting of one server, multiple relays, and multiple users. Based on a file-splitting method and maximum distance separable (MDS) codes, coded placement and delivery procedures for two-hop networks are designed. Numerical evaluations show that the proposed schemes decrease the transmission load and alleviate network congestion. The impact of the cache memories on the transmission time is analyzed in detail for both parallel and successive delivery. We prove that the centralized scheme always outperforms the decentralized scheme under both parallel and successive delivery. In terms of delivery latency, parallel delivery is considerably more effective than successive delivery for both the centralized and decentralized caching schemes. As the memories of the relays and users increase, the transmission load decreases gradually. Moreover, when the combined memory of a user and its connected relays can store the entire file library, the server is not needed in the delivery phase.
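The abstract does not spell out the construction, but centralized coded caching schemes of this kind build on the classic single-hop idea: split each file into subfiles indexed by user subsets, cache subfiles during placement, and broadcast XOR-coded messages that serve several users at once during delivery. As a point of reference, here is a minimal Python sketch of that classic single-hop centralized placement and XOR delivery (a Maddah-Ali/Niesen-style sketch, not the paper's two-hop MDS construction; the function names and the byte-level splitting are illustrative assumptions):

```python
from itertools import combinations

def centralized_placement(num_users, t, files):
    """Split each file into C(K, t) equal subfiles, one per size-t subset
    of users; user k caches every subfile whose index set contains k.
    Assumes each file's length is divisible by C(K, t)."""
    subsets = list(combinations(range(num_users), t))
    caches = {k: {} for k in range(num_users)}
    split = {}
    for name, content in files.items():
        size = len(content) // len(subsets)
        chunks = {S: content[i * size:(i + 1) * size]
                  for i, S in enumerate(subsets)}
        split[name] = chunks
        for S, chunk in chunks.items():
            for k in S:
                caches[k][(name, S)] = chunk
    return split, caches

def xor_bytes(parts):
    """XOR equal-length byte strings together."""
    out = bytearray(parts[0])
    for p in parts[1:]:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def delivery(num_users, t, split, demands):
    """For each (t+1)-subset T of users, broadcast the XOR of the subfiles
    W_{d_k, T\\{k}} for k in T; each user in T cancels the other terms
    with its cached copies and decodes its own missing subfile."""
    messages = {}
    for T in combinations(range(num_users), t + 1):
        parts = [split[demands[k]][tuple(u for u in T if u != k)] for k in T]
        messages[T] = xor_bytes(parts)
    return messages
```

For example, with K = 3 users, t = 1, and demands (A, B, A), the message for the pair {0, 1} serves both users at once: user 0 recovers the subfile of A indexed by {1} by XOR-ing that message with its cached subfile of B indexed by {0}.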

