Journal
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 69, Issue 4, Pages 4392-4402
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TVT.2020.2975849
Keywords
Fog radio access networks (F-RANs); deep reinforcement learning (DRL); low latency
Funding
- National Natural Science Foundation of China [61925101, 61831002]
- State Major Science and Technology Special Project [2018ZX03001023]
- Beijing Natural Science Foundation [JQ18016]
- National Program for Special Support of Eminent Professionals
The growing demand for rich content services and the development of the industrial Internet of Things and vehicle-to-everything communications pose challenging requirements for next-generation fog radio access networks (F-RANs). Although F-RANs are promising for supporting these enabling technologies by leveraging edge caching and edge computing, their delay performance remains a critical concern and should be optimized. In this paper, a latency optimization problem for F-RANs is formulated, and a deep reinforcement learning (DRL) based joint proactive cache placement and power allocation strategy is proposed to solve it. Furthermore, to enhance the content-serving capability at the edge, we consider a set of F-RAN nodes that cooperatively serve each content request. A user's demand can be satisfied adaptively, either in fog access point mode at the network edge or in centralized cloud computing mode at the cloud tier. The key idea of the proposal is to learn the user's demand and then make an intelligent decision about which content to cache and how much power to allocate. Simulation results show the effectiveness and performance gains of the proposal over baseline schemes while maintaining throughput.
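To make the abstract's core idea concrete, the following is a minimal toy sketch of learning a joint caching and power decision from observed requests. It is an illustrative assumption, not the authors' actual method: it uses a tabular epsilon-greedy learner instead of a deep network, and the catalogue size, power levels, and reward model (a cache hit yields low latency, discounted by the power level used) are all invented for the example.

```python
import random

# Toy sketch (NOT the paper's algorithm): a tabular epsilon-greedy learner
# that jointly picks which content to cache at a fog access point and
# which discrete transmit-power level to use. All sizes and the reward
# model below are illustrative assumptions.

N_CONTENTS = 4                      # catalogue size (assumed)
N_POWER = 3                         # discrete power levels (assumed)
N_ACTIONS = N_CONTENTS * N_POWER    # joint action: (cached content, power)

def reward(state, action):
    """Toy reward: a cache hit (cached content == requested content)
    gives low latency (+1), mildly penalized by the power level used."""
    cached, power = divmod(action, N_POWER)
    hit = 1.0 if cached == state else 0.0
    return hit - 0.1 * power

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_CONTENTS)]
    for _ in range(episodes):
        s = rng.randrange(N_CONTENTS)           # requested content
        if rng.random() < eps:                  # epsilon-greedy exploration
            a = rng.randrange(N_ACTIONS)
        else:                                   # exploit current estimate
            a = max(range(N_ACTIONS), key=lambda x: q[s][x])
        # One-step value update toward the observed reward
        q[s][a] += alpha * (reward(s, a) - q[s][a])
    return q

q = train()
# Greedy policy after training: for each requested content, the learner
# should have discovered that caching that content maximizes the reward.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_CONTENTS)]
```

In the paper's actual setting, the state and action spaces are far larger, which is why a deep network replaces the table; this sketch only shows the decision structure (one joint action covering both cache placement and power allocation).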