Article

Research on user recruitment algorithms based on user trajectory prediction with sparse mobile crowd sensing

Journal

Mathematical Biosciences and Engineering
Volume 20, Issue 7, Pages 11998-12023

Publisher

American Institute of Mathematical Sciences (AIMS)
DOI: 10.3934/mbe.2023533

Keywords

sparse mobile crowd sensing; STGCN-GRU; user selection; reinforcement learning

Abstract

Sparse mobile crowd sensing saves perception cost by recruiting a small number of users to sense data in a small number of sub-regions and then inferring the data of the remaining sub-regions. The data collected by different people along their respective trajectories have different values, so participants who can collect high-value data can be selected based on their predicted trajectories. In this paper, we study two aspects of the problem: user trajectory prediction and user recruitment. First, we propose an STGCN-GRU user trajectory prediction algorithm, which uses STGCN to extract temporal and spatial features from the trajectory graph and then feeds the feature sequences into a GRU for trajectory prediction; this improves the accuracy of user trajectory prediction. Second, we propose an ADQN (action DQN) user recruitment algorithm. Building on reinforcement learning, ADQN modifies the objective function of DQN: the action with the maximum value is selected by the Q network, and the target value for that action is then read from the target network. This reduces the overestimation problem of Q networks and improves the accuracy of user recruitment. Experimental results show that the proposed STGCN-GRU algorithm outperforms other representative algorithms on the evaluation metrics FDE (final displacement error) and ADE (average displacement error), and experiments on two real datasets verify the effectiveness of the ADQN user selection algorithm, which effectively improves the accuracy of data inference under budget constraints.
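
The STGCN-GRU pipeline described above can be pictured as a spatial graph convolution over the trajectory graph, a temporal convolution over the resulting features, and a GRU that turns the feature sequence into a predicted trajectory. The PyTorch sketch below is a minimal illustration of that arrangement; the layer sizes, two-stage depth, and the placeholder normalized adjacency a_hat are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of an STGCN-style extractor feeding a GRU, as described in
# the abstract. Dimensions and the identity adjacency are illustrative
# assumptions, not the authors' configuration.
import torch
import torch.nn as nn

class STGCNGRUSketch(nn.Module):
    def __init__(self, in_dim=2, gcn_dim=32, gru_dim=64, pred_len=12):
        super().__init__()
        self.gcn = nn.Linear(in_dim, gcn_dim)          # spatial graph-conv weights
        self.temporal = nn.Conv1d(gcn_dim, gcn_dim, kernel_size=3, padding=1)
        self.gru = nn.GRU(gcn_dim, gru_dim, batch_first=True)
        self.head = nn.Linear(gru_dim, pred_len * 2)   # predict (x, y) per future step

    def forward(self, x, a_hat):
        # x: (batch, time, nodes, in_dim) trajectory-graph features
        # a_hat: (nodes, nodes) normalized adjacency of the trajectory graph
        b, t, n, _ = x.shape
        h = torch.einsum("ij,btjd->btid", a_hat, x)        # aggregate spatial neighbors
        h = torch.relu(self.gcn(h))                        # per-node feature transform
        h = h.permute(0, 2, 3, 1).reshape(b * n, -1, t)    # (b*n, gcn_dim, time)
        h = torch.relu(self.temporal(h)).permute(0, 2, 1)  # temporal conv -> (b*n, time, gcn_dim)
        _, last = self.gru(h)                              # GRU summarizes the feature sequence
        return self.head(last.squeeze(0)).view(b, n, -1, 2)

model = STGCNGRUSketch()
x = torch.randn(4, 8, 10, 2)   # 4 samples, 8 time steps, 10 graph nodes
a_hat = torch.eye(10)          # placeholder adjacency for the demo
pred = model(x, a_hat)         # (4, 10, 12, 2) predicted coordinates
```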
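The ADQN target described in the abstract reads like a Double-DQN-style decoupling: the online Q network selects the argmax action, and the target network evaluates that action's value, which limits overestimation. A minimal sketch of that target computation, assuming standard PyTorch Q networks and an illustrative discount factor gamma:

```python
# Hedged sketch of the target value ADQN appears to use: select the action
# with the online network, evaluate it with the target network. q_online and
# q_target are assumed to be nn.Modules mapping states to per-action Q values.
import torch

def adqn_target(reward, next_state, done, q_online, q_target, gamma=0.99):
    # reward, done: (batch,) tensors; next_state: (batch, state_dim)
    with torch.no_grad():
        best_action = q_online(next_state).argmax(dim=1, keepdim=True)       # action selection
        next_value = q_target(next_state).gather(1, best_action).squeeze(1)  # action evaluation
    return reward + gamma * (1.0 - done) * next_value
```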

