Article

AGCosPlace: A UAV Visual Positioning Algorithm Based on Transformer

Journal

Drones
Volume 7, Issue 8, Article 498

Publisher

MDPI
DOI: 10.3390/drones7080498

Keywords

UAV visual navigation; visual positioning; graph network; transformer

Summary

This paper proposes AGCosPlace, a visual positioning algorithm based on image retrieval that leverages the Transformer architecture to improve performance and to overcome the limitation that the relative poses and intrinsic parameters of the drone camera are unknown. The algorithm encodes the backbone's feature map with attention mechanisms, multi-layer perceptron coding, and a graph network module for better aggregation of contextual information. Experimental results demonstrate that the proposed algorithm achieves notable improvements in the four evaluation metrics and can be used effectively for UAV visual positioning tasks.
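
As a concrete illustration of the pipeline described above, the following is a minimal PyTorch-style sketch. All class names and dimensions are hypothetical: it approximates the attention-plus-MLP encoding with a single Transformer-style block and uses plain adaptive average pooling as a stand-in for the paper's graph network module and dynamic adaptive pooling, whose exact designs are not reproduced here.

    # Hedged sketch of an AGCosPlace-style pipeline (illustrative only).
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class EncoderBlock(nn.Module):
        """Transformer-style encoding of the backbone feature map:
        self-attention over spatial tokens followed by an MLP."""
        def __init__(self, dim, heads=8):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim))

        def forward(self, x):                      # x: (B, N, dim) spatial tokens
            h = self.norm1(x)
            x = x + self.attn(h, h, h)[0]          # aggregate context via attention
            return x + self.mlp(self.norm2(x))     # MLP coding with residual

    class AGCosPlaceSketch(nn.Module):
        def __init__(self, desc_dim=512, num_classes=1000):
            super().__init__()
            resnet = models.resnet18(weights=None)
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # keep conv stages
            self.encoder = EncoderBlock(dim=512)
            self.pool = nn.AdaptiveAvgPool1d(1)    # stand-in for dynamic adaptive pooling
            self.fc = nn.Linear(512, desc_dim)     # descriptor of chosen dimensionality
            self.classifier = nn.Linear(desc_dim, num_classes)  # position classes

        def forward(self, img):                    # img: (B, 3, H, W)
            f = self.backbone(img)                 # (B, 512, h, w) feature map
            tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, 512) spatial tokens
            tokens = self.encoder(tokens)          # attention + MLP encoding
            desc = self.pool(tokens.transpose(1, 2)).squeeze(-1)   # (B, 512)
            desc = nn.functional.normalize(self.fc(desc), dim=-1)  # L2-normalized descriptor
            return desc, self.classifier(desc)     # descriptor + position logits

In CosPlace-style training the classifier head supervises the descriptor over geographic classes; at test time only the L2-normalized descriptor is used, and positioning reduces to nearest-neighbor image retrieval against the geo-tagged database.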
Abstract

To address this limitation and obtain the position of the drone even when the relative poses and intrinsic parameters of the drone camera are unknown, we propose AGCosPlace, a visual positioning algorithm based on image retrieval that leverages the Transformer architecture to achieve improved performance. Our approach subjects the feature map of the backbone to an encoding operation that incorporates attention mechanisms, multi-layer perceptron coding, and a graph network module. This encoding operation allows for better aggregation of the contextual information present in the image. Subsequently, an aggregation module with dynamic adaptive pooling produces a descriptor of appropriate dimensionality, which is passed to a classifier to recognize the position. Given the complexity of labeling visual positioning ground truth for UAV images, the network is trained on the publicly available Google Street View SF-XL dataset, and the trained model is evaluated on a custom UAV-perspective test set. The experimental results demonstrate that the proposed algorithm, which improves upon ResNet backbone networks on the SF-XL test set, also exhibits excellent performance on the UAV test set, achieving notable improvements in the four evaluation metrics: R@1, R@5, R@10, and R@20. These results confirm that the trained visual positioning network can be employed effectively in UAV visual positioning tasks.
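
For reference, the R@N metrics reported above are the standard visual-place-recognition recalls: a query counts as solved if any of its top-N retrieved database images lies within a ground-truth distance threshold of the query. Below is a minimal NumPy sketch; the 25 m threshold and all variable names are illustrative assumptions, not values taken from the paper.

    # Hedged sketch of Recall@N (R@1/R@5/R@10/R@20) for retrieval-based positioning.
    import numpy as np

    def recall_at_n(query_desc, db_desc, query_pos, db_pos,
                    ns=(1, 5, 10, 20), threshold_m=25.0):
        """query_desc: (Q, D), db_desc: (M, D) L2-normalized descriptors;
        query_pos: (Q, 2), db_pos: (M, 2) planar coordinates in meters."""
        sims = query_desc @ db_desc.T                  # cosine similarity (Q, M)
        ranked = np.argsort(-sims, axis=1)             # best match first
        dists = np.linalg.norm(query_pos[:, None, :] - db_pos[None, :, :], axis=-1)
        correct = dists <= threshold_m                 # (Q, M) ground-truth hits
        recalls = {}
        for n in ns:
            hits = [correct[q, ranked[q, :n]].any() for q in range(len(query_desc))]
            recalls[f"R@{n}"] = 100.0 * np.mean(hits)  # percentage of solved queries
        return recalls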
