Article

UAV-Enabled Secure Communications by Multi-Agent Deep Reinforcement Learning

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 69, Issue 10, Pages 11599-11611

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TVT.2020.3014788

Keywords

UAV; multi-agent deep reinforcement learning; trajectory design; policy gradient; physical layer security

Funding

  1. National Key Research and Development Program of China [2018AAA0102401]
  2. National Natural Science Foundation of China [61831013, 61771274, 61531011, 61871321]
  3. Beijing Municipal Natural Science Foundation [4182030, L182042]
  4. US NSF [EARS-1839818, CNS-1717454, CNS-1731424, CNS-1702850]


Unmanned aerial vehicles (UAVs) can be employed as aerial base stations to support communication for ground users (GUs). However, because of the high flying altitude, the air-to-ground (A2G) channel is dominated by the line-of-sight (LoS) component and is therefore easily wiretapped by ground eavesdroppers (GEs). In this setting, a single UAV has limited maneuvering capability to attain the desired secrecy rate in the presence of multiple eavesdroppers. In this paper, we propose a cooperative jamming approach in which UAV jammers help the UAV transmitter defend against the GEs: the UAV transmitter sends confidential information to the GUs, while the UAV jammers direct artificial-noise signals at the GEs via 3D beamforming. We propose a multi-agent deep reinforcement learning (MADRL) approach, namely multi-agent deep deterministic policy gradient (MADDPG), to maximize the secrecy capacity by jointly optimizing the trajectories of the UAVs, the transmit power of the UAV transmitter, and the jamming power of the UAV jammers. The MADDPG algorithm adopts centralized training and distributed execution. Simulation results show that the MADRL method realizes the joint trajectory design of the UAVs and achieves good performance. To improve learning efficiency and convergence, we further propose a continuous action attention MADDPG (CAA-MADDPG) method, in which each agent learns to pay attention to the actions and observations of the other agents that are most relevant to it. Simulation results show that CAA-MADDPG achieves higher rewards than MADDPG without attention.
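
To make the "centralized training, distributed execution" structure of MADDPG concrete, the sketch below shows a per-agent actor that acts on its own local observation and a centralized critic that conditions on all agents' observations and actions during training. This is a minimal illustration assuming PyTorch; the agent count, observation/action dimensions, network sizes, and the toy batch are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal MADDPG-style sketch: centralized training, distributed execution.
# Assumes PyTorch. All dimensions and layer sizes are illustrative placeholders
# (e.g., 1 UAV transmitter + 2 UAV jammers), not the paper's exact setup.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 4

class Actor(nn.Module):
    """Decentralized policy: maps one agent's own observation to a continuous
    action (e.g., 3D velocity plus transmit/jamming power)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh())  # actions bounded in [-1, 1]

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized Q-function: during training it sees the observations and
    actions of ALL agents, which stabilizes learning in multi-agent settings."""
    def __init__(self):
        super().__init__()
        in_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, all_obs, all_acts):
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [CentralCritic() for _ in range(N_AGENTS)]

# Toy centralized training step for agent i (target networks and the replay
# buffer of a full MADDPG implementation are omitted for brevity):
i = 0
batch_obs = torch.randn(32, N_AGENTS, OBS_DIM)
batch_act = torch.stack(
    [actors[j](batch_obs[:, j]) if j == i
     else actors[j](batch_obs[:, j]).detach()  # others' actions held fixed
     for j in range(N_AGENTS)], dim=1)
q_val = critics[i](batch_obs.flatten(1), batch_act.flatten(1))
actor_loss = -q_val.mean()  # deterministic policy gradient ascent on Q
actor_loss.backward()

# Distributed execution: each UAV acts on its local observation only.
with torch.no_grad():
    local_action = actors[i](torch.randn(OBS_DIM))
```

Note the asymmetry that defines the method: the critic's global view exists only at training time, so after training each UAV can fly and set its power using nothing but its own observations.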
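The CAA-MADDPG variant adds an attention mechanism so that each agent's critic weights the other agents' observation-action pairs by learned relevance rather than treating them uniformly. The sketch below illustrates that idea only; the scaled dot-product scoring, the shared encoder, and all layer sizes are assumptions, since the abstract does not specify the exact attention design.

```python
# Hedged sketch of the attention idea behind CAA-MADDPG: the critic learns
# attention weights over the other agents' (observation, action) pairs, so
# each agent focuses on the teammates most relevant to its value estimate.
# The scoring function and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, hid=64):
        super().__init__()
        self.encode = nn.Linear(obs_dim + act_dim, hid)  # shared (obs, act) encoder
        self.query = nn.Linear(hid, hid)
        self.key = nn.Linear(hid, hid)
        self.head = nn.Sequential(
            nn.Linear(2 * hid, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, own_oa, others_oa):
        # own_oa: (batch, obs+act); others_oa: (batch, n_others, obs+act)
        h_own = self.encode(own_oa)                      # (batch, hid)
        h_oth = self.encode(others_oa)                   # (batch, n_others, hid)
        scores = torch.einsum('bh,bnh->bn', self.query(h_own), self.key(h_oth))
        w = F.softmax(scores / h_own.shape[-1] ** 0.5, dim=-1)  # relevance weights
        context = torch.einsum('bn,bnh->bh', w, h_oth)   # weighted summary of others
        return self.head(torch.cat([h_own, context], dim=-1))  # Q estimate

critic = AttentionCritic(obs_dim=8, act_dim=4)
q = critic(torch.randn(32, 12), torch.randn(32, 2, 12))  # batch of 32, 2 other agents
```

Because the attention weights are learned jointly with the Q-function, an agent can downweight agents whose actions barely affect its reward, which is consistent with the abstract's claim that attention improves learning efficiency and convergence.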

