Article

Asymmetric Decentralized Caching With Coded Prefetching Under Nonuniform Requests

Journal

IEEE Systems Journal
Volume 16, Issue 1, Pages 197–208

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/JSYST.2021.3095190

Keywords

Prefetching; Servers; Wireless communication; STEM; Multicast communication; Memory management; Encoding; Asymmetric delivery; coded prefetching; decentralized caching; expected normalized rate; user grouping

Funding

  1. Singapore Ministry of Education Academic Research Fund Tier 2 [MOE2016-T2-2-054]
  2. Singapore University of Technology and Design Start-up Research [SRLS15095]
  3. Natural Science Foundation of Jiangsu Province [BK2021045477]
  4. National Natural Science Foundation of China [61872184]

Abstract

We investigate a basic decentralized caching network with coded prefetching under nonuniform requests and arbitrary file popularities, where a server containing $N$ files is connected through a shared link to $K$ users, each equipped with a limited cache memory of $M$ files. In the decentralized placement phase, the server encodes all files with maximum distance separable (MDS) codes of different rates, and each user allocates different cache weights to different files, so that each user randomly prefetches coded subfiles of diverse sizes. In this setting, the symmetric delivery used in existing decentralized caching networks with coded prefetching cannot be applied directly. To address this problem, we develop an asymmetric delivery procedure for the decentralized caching network with arbitrary MDS code rates and cache weights. Furthermore, we characterize the expected normalized rate induced by the asymmetric delivery using the concepts of user grouping and the leader set. Building on the proposed asymmetric delivery and rate analysis, we derive the exact rate-memory tradeoff for a decentralized two-user, two-file caching network and optimize the cache weights to minimize the expected normalized rate. Finally, numerical results corroborate our analytical results in the two-user, two-file caching scenario.
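
The placement described in the abstract can be illustrated with a small simulation. The sketch below is not the paper's scheme; it is a minimal toy model assuming that each MDS-coded file behaves as a large pool of coded symbols and that user k independently caches a random fraction w[k][n] of file n's symbols (its cache weight for that file). The weight matrix w and the symbol count SYMBOLS are hypothetical values chosen only for illustration. The point it makes is that asymmetric weights split a requested file unevenly across user subsets, which is why the usual symmetric delivery cannot be applied directly.

import random

# Illustrative toy model (an assumption, not the paper's exact scheme):
# each file is treated as a large pool of MDS-coded symbols, and user k
# independently caches a random fraction w[k][n] of file n's symbols.

K, N = 2, 2          # users and files (the two-user, two-file case)
M = 1.0              # cache size per user, in units of files
SYMBOLS = 10_000     # coded symbols per file (large, so fractions concentrate)

# Hypothetical asymmetric cache weights w[k][n]; each row must respect the
# memory constraint sum_n w[k][n] <= M.
w = [[0.7, 0.3],
     [0.4, 0.6]]
for k in range(K):
    assert sum(w[k]) <= M + 1e-9, "cache weights exceed the memory budget"

random.seed(0)

def placement():
    # Decentralized placement: each user caches each symbol of file n
    # independently with probability w[k][n].
    cache = [[set() for _ in range(N)] for _ in range(K)]
    for n in range(N):
        for s in range(SYMBOLS):
            for k in range(K):
                if random.random() < w[k][n]:
                    cache[k][n].add(s)
    return cache

def subset_profile(cache, n):
    # Fraction of file n's symbols held by exactly each subset of users;
    # these per-subset sizes are what the delivery phase must serve.
    profile = {}
    for s in range(SYMBOLS):
        holders = frozenset(k for k in range(K) if s in cache[k][n])
        profile[holders] = profile.get(holders, 0) + 1.0 / SYMBOLS
    return profile

cache = placement()
for n in range(N):
    fractions = {tuple(sorted(S)): round(f, 3)
                 for S, f in subset_profile(cache, n).items()}
    print(f"file {n}: cached-by-subset fractions = {fractions}")

Running the sketch prints, for each file, the fraction of its symbols held by each subset of users; with the asymmetric weights above these fractions differ across users and files, matching the abstract's observation that the randomly prefetched coded subfiles have diverse sizes.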
