Article

Liquid State Machine Learning for Resource and Cache Management in LTE-U Unmanned Aerial Vehicle (UAV) Networks

Journal

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
卷 18, 期 3, 页码 1504-1517

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TWC.2019.2891629

Keywords

Cache-enabled UAVs; LTE-U; resource allocation; machine learning; liquid state machine

Funding

  1. National Natural Science Foundation of China [61671086, 61629101, 61871041]
  2. 111 Project [B17007]
  3. Shenzhen Fundamental Research Fund [KQTD2015033114415450, ZDSYS201707251409055, 2017ZT07X152]
  4. BUPT Excellent Ph.D. Students Foundation [CX2017309]
  5. U.S. National Science Foundation [CNS-1460316, IIS-1633363]

Abstract

In this paper, the problem of joint caching and resource allocation is investigated for a network of cache-enabled unmanned aerial vehicles (UAVs) that serve wireless ground users over the LTE licensed and unlicensed bands. The considered model focuses on users that can access both licensed and unlicensed bands while receiving contents either directly from the cache units at the UAVs or via content server-UAV-user links. This problem is formulated as an optimization problem that jointly incorporates user association, spectrum allocation, and content caching. To solve this problem, a distributed algorithm based on the machine learning framework of liquid state machine (LSM) is proposed. Using the proposed LSM algorithm, the cloud can predict the users' content request distribution while having only limited information on the network's and users' states. The proposed algorithm also enables the UAVs to autonomously choose the optimal resource allocation strategies that maximize the number of users with stable queues, depending on the network states. Based on the users' association and content request distributions, the optimal contents that need to be cached at the UAVs and the optimal resource allocation are derived. Simulation results using real datasets show that the proposed approach yields gains of up to 17.8% and 57.1%, respectively, in terms of the number of users with stable queues, compared with two baseline algorithms: Q-learning with cache and Q-learning without cache. The results also show that the LSM improves the convergence time by up to 20% compared with conventional learning algorithms such as Q-learning.
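The LSM-based prediction described in the abstract belongs to the reservoir-computing family: a fixed recurrent network (the "liquid") maps an input history into a high-dimensional state, and only a linear readout is trained. The sketch below illustrates that general principle on a toy time-series task; it is not the paper's algorithm, and all dimensions, scaling constants, and the synthetic task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "reservoir": only the readout weights are ever trained.
N_IN, N_RES = 3, 50                         # input dim, reservoir size (assumed)
W_in = rng.normal(0, 0.5, (N_RES, N_IN))    # input-to-reservoir weights
W = rng.normal(0, 1.0, (N_RES, N_RES))      # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

def run_reservoir(inputs):
    """Drive the reservoir with a sequence of input vectors; collect states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a noisy sinusoid from 3 lagged samples.
t = np.arange(500)
sig = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
U = np.stack([sig[i:i + 3] for i in range(len(sig) - 3)])  # lag vectors
y = sig[3:]                                                # next-sample targets

S = run_reservoir(U)                                 # reservoir state trajectory
w_out, *_ = np.linalg.lstsq(S, y, rcond=None)        # train linear readout only
mse = np.mean((S @ w_out - y) ** 2)
print(f"readout MSE: {mse:.4f}")
```

The key design property, and the reason for the convergence gains reported above, is that training reduces to a linear regression over reservoir states rather than backpropagation through the recurrent dynamics.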
