Article

Caching Placement Optimization in UAV-Assisted Cellular Networks: A Deep Reinforcement Learning-Based Framework

Journal

IEEE Wireless Communications Letters
Volume 12, Issue 8, Pages 1359-1363

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/LWC.2023.3274535

Keywords

Caching placement; timeliness; proximal policy optimization; unmanned aerial vehicle

Summary

In this study, the caching placement problem of UAVs for enhancing service timeliness is investigated. A modified timeliness model, called effective age of information (EAoI), is proposed to evaluate service timeliness comprehensively. The proximal policy optimization (PPO) algorithm is employed to build a deep reinforcement learning framework that adaptively finds the optimal caching strategy. Extensive simulation results demonstrate the superiority of the proposed scheme over conventional schemes.

Abstract

Capable of delivering content offloaded from the base station (BS) to users, the unmanned aerial vehicle (UAV) has emerged as a crucial complement to terrestrial BS-based communication. However, the limited storage capacity of the UAV makes it challenging to provide low-latency services to users. In this letter, we investigate the caching placement of the UAV for enhancing the timeliness of services. To cope with unknown content popularity, proximal policy optimization (PPO) is adopted in the proposed algorithm. Specifically, we first propose a modified timeliness model, named effective age of information (EAoI), to comprehensively evaluate the timeliness of services. Then, we employ PPO to build a deep reinforcement learning framework for adaptively finding the optimal caching strategy. Extensive simulation results are provided to verify the superiority of the proposed scheme in comparison with conventional schemes.
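
The abstract only outlines the approach at a high level. As a rough, hypothetical sketch of how such a PPO-based caching-placement loop could be wired up, the Python snippet below trains an off-the-shelf PPO agent (Stable-Baselines3) on a toy Gymnasium environment whose reward penalizes an EAoI-flavoured service age. This is not the authors' implementation: the environment, the Zipf popularity model, the cache size, and the age constants are all illustrative assumptions.

```python
# Minimal sketch (assumptions only, not the letter's model) of PPO-based
# caching placement for a UAV cache with an EAoI-style reward.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # off-the-shelf PPO implementation

NUM_CONTENTS = 20    # content library size (assumed)
CACHE_SIZE = 5       # UAV cache capacity (assumed)
AGE_CACHED = 1.0     # service age when a request hits the UAV cache (assumed)
AGE_BACKHAUL = 5.0   # extra age when content is fetched from the BS (assumed)


class UAVCachingEnv(gym.Env):
    """Toy environment: each slot the agent chooses which contents to cache;
    the reward is the negative average EAoI-like cost of that slot's requests."""

    def __init__(self, zipf_exponent=0.8, requests_per_slot=30, horizon=50):
        super().__init__()
        # One binary caching decision per content item.
        self.action_space = spaces.MultiBinary(NUM_CONTENTS)
        # Observation: empirical request frequencies of the previous slot
        # (true content popularity is unknown to the agent, as in the letter).
        self.observation_space = spaces.Box(0.0, 1.0, shape=(NUM_CONTENTS,), dtype=np.float32)
        ranks = np.arange(1, NUM_CONTENTS + 1)
        self._popularity = ranks ** (-zipf_exponent)
        self._popularity /= self._popularity.sum()
        self._requests_per_slot = requests_per_slot
        self._horizon = horizon

    def _sample_requests(self):
        return self.np_random.choice(NUM_CONTENTS, size=self._requests_per_slot, p=self._popularity)

    def _freq(self, requests):
        counts = np.bincount(requests, minlength=NUM_CONTENTS).astype(np.float32)
        return counts / counts.sum()

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        return self._freq(self._sample_requests()), {}

    def step(self, action):
        cached = np.flatnonzero(action)[:CACHE_SIZE]   # enforce cache capacity
        requests = self._sample_requests()
        hit = np.isin(requests, cached)
        # EAoI-like cost: cache hits are served fresh, misses pay a backhaul penalty.
        age = np.where(hit, AGE_CACHED, AGE_CACHED + AGE_BACKHAUL)
        reward = -float(age.mean())
        self._t += 1
        return self._freq(requests), reward, self._t >= self._horizon, False, {}


if __name__ == "__main__":
    env = UAVCachingEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)  # adaptively learns which contents to cache
```

Here the capacity constraint is enforced by simply truncating the agent's selection to the first CACHE_SIZE chosen items; a more faithful formulation would encode the constraint in the action space or penalize violations, and the reward would use the letter's actual EAoI definition rather than the fixed age constants assumed above.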
