Article

Distributed Learning-Based Resource Allocation for Self-Organizing C-V2X Communication in Cellular Networks

Journal

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/OJCOMS.2022.3211340

Keywords

Resource management; Device-to-device communication; Q-learning; Games; Interference; Learning systems; Uplink; Cellular vehicle-to-everything (C-V2X) communication; PD-NOMA; resource allocation; learning algorithm

Summary

This paper investigates a resource allocation problem in a C-V2X network to improve energy efficiency. Self-organizing mechanisms and a multi-agent Q-learning algorithm are proposed for joint and disjoint subcarrier and power allocation in a distributed manner. Simulation results show significant performance gains of the multi-agent joint Q-learning algorithm in terms of energy efficiency.
Abstract

In this paper, we investigate a resource allocation problem for a Cellular Vehicle-to-Everything (C-V2X) network to improve the energy efficiency of the system. To address this problem, self-organizing mechanisms are proposed for joint and disjoint subcarrier and power allocation procedures, which are performed in a fully distributed manner. A multi-agent Q-learning algorithm is proposed for joint power and subcarrier allocation. In addition, for the sake of simplicity, the problem is decoupled into two sub-problems: a subcarrier allocation sub-problem and a power allocation sub-problem. First, to allocate subcarriers among users, a distributed Q-learning method is proposed. Then, given the optimal subcarriers, a dynamic power allocation mechanism is proposed in which the problem is modeled as a non-cooperative game; to solve this game, a no-regret learning algorithm is utilized. To evaluate the performance of the proposed approaches, they are compared against other learning mechanisms, which are presented in Fig. 8. Simulation results show that the multi-agent joint Q-learning algorithm yields significant energy-efficiency gains of up to about 11% and 18% compared to the proposed disjoint mechanism and the third disjoint Q-learning mechanism, respectively; however, the joint algorithm uses more memory than the disjoint methods.
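The abstract does not give the paper's exact state, action, or reward definitions, so the following is only an illustrative sketch of the distributed subcarrier-selection idea: a minimal stateless (bandit-style) Q-learning agent that each vehicle could run locally, updating from a locally observed reward. The class name, toy reward model, and parameter values are all hypothetical, not the authors' formulation.

```python
import random

class SubcarrierQAgent:
    """One vehicle agent keeping an independent Q-value per subcarrier.

    Hypothetical sketch of distributed Q-learning for subcarrier
    selection; not the paper's exact algorithm.
    """

    def __init__(self, n_subcarriers, alpha=0.1, epsilon=0.2):
        self.q = [0.0] * n_subcarriers  # Q-value for choosing each subcarrier
        self.alpha = alpha              # learning rate
        self.epsilon = epsilon          # exploration probability

    def act(self):
        # Epsilon-greedy: occasionally explore a random subcarrier,
        # otherwise pick the currently best-valued one.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, action, reward):
        # Stateless Q-learning update from a locally observed reward
        # (e.g., the agent's own measured energy efficiency).
        self.q[action] += self.alpha * (reward - self.q[action])

# Toy run: pretend subcarrier 2 is interference-free (reward 1),
# all others are occupied (reward 0).
random.seed(0)
agent = SubcarrierQAgent(n_subcarriers=4)
for _ in range(1000):
    a = agent.act()
    agent.update(a, 1.0 if a == 2 else 0.0)

agent.epsilon = 0.0  # act greedily after learning
```

In the disjoint scheme the abstract describes, each vehicle would run such an agent for subcarrier selection, and a separate game-theoretic no-regret learning step would then set transmit powers; that second stage is omitted here for brevity.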

