Journal
2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022)
Volume -, Issue -, Pages 1920-1923
Publisher
IEEE
DOI: 10.1109/IGARSS46834.2022.9884661
Keywords
Video classification; Out-of-distribution detection; Vision transformers
Categories
This article introduces a model for event recognition in UAV aerial videos. The model uses attention-based mechanisms to extract spatiotemporal features from the videos and is trained with IsoMax loss to detect out-of-distribution videos. Experimental results show that the model can accurately detect non-event videos and improve the classification accuracy of known events.
Most classification models are built for closed-set environments, where the model is trained to assign samples to a set of predefined categories. This assumption does not hold for models built for UAV aerial videos, where novel videos are likely to be encountered at test time. Dealing with unknown videos is fundamental for a reliable classification model. Therefore, in this work, we propose a model for recognizing events acquired from UAV platforms with a non-event detection capability. Our model utilizes an attention-based architecture to extract discriminative spatiotemporal features from the video clips. The model is then trained with the IsoMax loss to detect out-of-distribution videos. The proposed model is evaluated on the UAV Event Recognition dataset (ERA), and the results show that it detects non-event videos with 70.87% precision. Moreover, non-event detection increases the accuracy of recognizing known events to 68.44%, outperforming other state-of-the-art models.
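The core idea behind IsoMax-style out-of-distribution detection can be illustrated with a small sketch. Logits are defined as negative distances between a feature vector and learnable class prototypes, scaled by an entropic scale; training then uses ordinary cross-entropy, and at test time a sample far from every prototype is flagged as out-of-distribution (here, a non-event video). This is a minimal NumPy illustration of the loss and score, not the authors' implementation; the prototype values, the entropic scale of 10, and the minimum-distance OOD score are illustrative assumptions.

```python
import numpy as np

def isomax_logits(features, prototypes, entropic_scale=10.0):
    # IsoMax-style logits: negative Euclidean distance from each feature
    # vector to each class prototype, scaled by the entropic scale.
    # features: (N, D), prototypes: (C, D) -> logits: (N, C)
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return -entropic_scale * dists

def isomax_loss(features, prototypes, labels, entropic_scale=10.0):
    # Standard cross-entropy on the distance-based logits.
    logits = isomax_logits(features, prototypes, entropic_scale)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def ood_score(features, prototypes):
    # Minimum distance to any prototype: a large value means the sample
    # lies far from every known class, i.e. likely out-of-distribution.
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.min(axis=1)

# Toy example: three 2-D class prototypes (illustrative values).
prototypes = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
in_dist = np.array([[1.0, 0.0]])    # sits on prototype 0 -> low loss, low score
far_ood = np.array([[10.0, 10.0]])  # far from all prototypes -> high score

loss = isomax_loss(in_dist, prototypes, labels=np.array([0]))
scores = ood_score(np.vstack([in_dist, far_ood]), prototypes)
```

In a real model, `features` would come from the attention-based video backbone and `prototypes` would be learned jointly with it; thresholding `ood_score` then separates known events from non-event clips.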