4.7 Article

Attention-Weighted Federated Deep Reinforcement Learning for Device-to-Device Assisted Heterogeneous Collaborative Edge Caching

Journal

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
Volume 39, Issue 1, Pages 154-169

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/JSAC.2020.3036946

Keywords

Device-to-device communication; Data models; Collaboration; Servers; Delays; Computational modeling; Training; Edge caching; device-to-device; attention-weighted federated learning; deep reinforcement learning

Funding

  1. National Key Research and Development Program of China [2019YFB2101901, 2018YFC0809803, 2018YFF0214700]
  2. National NSFC [62072332, 61902044, 62002260, 61672117, 62072060]
  3. Chongqing Research Program of Basic Research and Frontier Technology [cstc2019jcyj-msxmX0589]
  4. Fundamental Research Funds for the Central Universities [2020CDJQY-A022]
  5. Academy of Finland Project CSN [311654]
  6. Chinese National Engineering Laboratory for Big Data System Computing Technology at Shenzhen University
  7. Canadian Natural Sciences and Engineering Research Council
  8. 6Genesis Project [318927]


This study proposes a D2D-assisted heterogeneous collaborative edge caching framework that jointly optimizes node selection and cache replacement in mobile networks through flexible trilateral cooperation, using a deep Q-network and an attention-weighted federated deep reinforcement learning model. Simulations demonstrate its effectiveness in reducing delay, improving hit rate, and offloading traffic, and the convergence of the algorithm is proven.
In order to meet the growing demand for multimedia service access and relieve the pressure on the core network, edge caching and device-to-device (D2D) communication have been regarded as two promising techniques in next-generation mobile networks and beyond. However, most existing related studies lack consideration of effective cooperation and adaptability to dynamic network environments. In this article, based on flexible trilateral cooperation among user equipment, edge base stations, and a cloud server, we propose a D2D-assisted heterogeneous collaborative edge caching framework that jointly optimizes node selection and cache replacement in mobile networks. We formulate the joint optimization problem as a Markov decision process and use a deep Q-learning network to solve the long-term mixed-integer linear programming problem. We further design an attention-weighted federated deep reinforcement learning (AWFDRL) model that uses federated learning to improve the training efficiency of the Q-learning network under limited computing and storage capacity, and incorporates an attention mechanism that optimizes the aggregation weights to avoid imbalance in local model quality. We prove the convergence of the corresponding algorithm and present simulation results showing the effectiveness of the proposed AWFDRL framework in reducing the average delay of content access, improving the hit rate, and offloading traffic.
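The core aggregation idea in the abstract — weighting each device's local Q-network update by an attention score reflecting local model quality, rather than averaging uniformly — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the scoring rule (softmax over negative validation losses) and the function names are assumptions for the example.

```python
import numpy as np

def attention_weights(local_losses, temperature=1.0):
    """Softmax over negative local validation losses: lower loss -> higher
    aggregation weight. (Hypothetical quality score; the paper derives its
    weights from an attention mechanism over local model quality.)"""
    scores = -np.asarray(local_losses, dtype=float) / temperature
    scores -= scores.max()          # shift for numerical stability
    w = np.exp(scores)
    return w / w.sum()

def aggregate(local_params, weights):
    """Attention-weighted average of local Q-network parameter vectors,
    replacing the uniform mean of plain federated averaging."""
    stacked = np.stack(local_params)          # shape: (n_devices, n_params)
    return np.tensordot(weights, stacked, axes=1)

# Toy round: three devices report parameters and validation losses.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
losses = [0.2, 0.5, 1.0]
w = attention_weights(losses)
global_params = aggregate(params, w)
```

Here the device with the lowest local loss receives the largest weight, so the global model is pulled toward the better-trained local models instead of being dragged equally by all of them.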


