Article

Distributed Learning-Based Resource Allocation for Self-Organizing C-V2X Communication in Cellular Networks

Journal

IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY
Volume 3, Pages 1719-1736

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/OJCOMS.2022.3211340

Keywords

Resource management; Device-to-device communication; Q-learning; Games; Interference; Learning systems; Uplink; Cellular vehicle-to-everything (C-V2X) communication; PD-NOMA; resource allocation; learning algorithm


In this paper, we investigate a resource allocation problem for a Cellular Vehicle-to-Everything (C-V2X) network to improve the energy efficiency of the system. To address this problem, self-organizing mechanisms are proposed for joint and disjoint subcarrier and power allocation procedures, which are performed in a fully distributed manner. A multi-agent Q-learning algorithm is proposed for the joint power and subcarrier allocation. In addition, for the sake of simplicity, the problem is decoupled into two sub-problems: a subcarrier allocation sub-problem and a power allocation sub-problem. First, a distributed Q-learning method is proposed to allocate subcarriers among the users. Then, given the optimal subcarriers, a dynamic power allocation mechanism is proposed in which the problem is modeled as a non-cooperative game and solved with a no-regret learning algorithm. To evaluate the proposed approaches, they are compared against other learning mechanisms, which are presented in Fig. 8. Simulation results show that the multi-agent joint Q-learning algorithm yields significant energy-efficiency gains of up to about 11% and 18% compared to the proposed disjoint mechanism and the third disjoint Q-learning mechanism, respectively; however, the multi-agent joint Q-learning algorithm uses more memory than the disjoint methods.

