Article

Cooperative Data Collection With Multiple UAVs for Information Freshness in the Internet of Things

Journal

IEEE TRANSACTIONS ON COMMUNICATIONS
Volume 71, Issue 5, Pages 2740-2755

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TCOMM.2023.3255240

Keywords

Trajectory; Data collection; Autonomous aerial vehicles; Internet of Things; Energy consumption; Scheduling; Training; Age of information; deep reinforcement learning; unmanned aerial vehicle


Maintaining the freshness of information in the Internet of Things (IoT) is a critical yet challenging problem. In this paper, we study cooperative data collection using multiple Unmanned Aerial Vehicles (UAVs) with the objective of minimizing the total average Age of Information (AoI). We consider various constraints of the UAVs, including kinematic, energy, trajectory, and collision-avoidance constraints, in order to optimize the data collection process. Specifically, each UAV, which has limited on-board energy, takes off from its initial location and flies over sensor nodes to collect update packets in cooperation with the other UAVs. The UAVs must land at their final destinations with non-negative residual energy after the specified time duration to ensure they have enough energy to complete their missions. It is crucial to design the trajectories of the UAVs and the transmission scheduling of the sensor nodes to enhance information freshness. We model the multi-UAV data collection problem as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP), as each UAV is unaware of the dynamics of the environment and can only observe a subset of the sensors. To address the challenges of this problem, we propose a multi-agent Deep Reinforcement Learning (DRL)-based algorithm with centralized learning and decentralized execution. In addition to reward shaping, we use action masks to filter out invalid actions and ensure that the constraints are met. Simulation results demonstrate that the proposed algorithms can significantly reduce the total average AoI compared to the baseline algorithms, and that the action-mask method can improve the convergence speed of the proposed algorithm.
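The action-masking idea mentioned in the abstract can be sketched as follows: invalid actions (e.g. a flight move that would violate an energy or collision constraint) are assigned a logit of negative infinity before the softmax, so the policy can never sample them. This is a minimal illustrative sketch, not the paper's implementation; the function name, logits, and mask values are assumptions for demonstration.

```python
import numpy as np

def masked_action_probs(logits, action_mask):
    """Return a policy distribution with invalid actions zeroed out.

    logits: raw policy scores, one per candidate action.
    action_mask: boolean array, True where the action is valid.
    """
    # Invalid actions get -inf, so exp(-inf) = 0 after the softmax.
    masked = np.where(action_mask, logits, -np.inf)
    # Numerically stable softmax over the remaining valid actions.
    shifted = masked - masked.max()
    exp = np.exp(shifted)
    return exp / exp.sum()

# Example: 4 candidate UAV actions; actions 1 and 3 are hypothetically
# invalid (say, they would leave insufficient energy to reach the
# final landing point).
logits = np.array([0.5, 2.0, 1.0, -0.3])
mask = np.array([True, False, True, False])
probs = masked_action_probs(logits, mask)
```

Because masked actions receive exactly zero probability, the agent only explores the feasible action set, which is one plausible mechanism for the faster convergence the abstract reports.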
