Article

Static Gesture Recognition Algorithm Based on Improved YOLOv5s

Journal

ELECTRONICS
Volume 12, Issue 3, Pages -

Publisher

MDPI
DOI: 10.3390/electronics12030596

Keywords

YOLOv5s; gesture recognition; ASFF; CARAFE; bottleneck transformer

Abstract

With increasing government support for the virtual reality (VR) and augmented reality (AR) industry, the field has developed rapidly in recent years. Gesture recognition, an important human-computer interaction method in VR/AR technology, is widely used in virtual reality. Current static gesture recognition technology suffers from low recognition accuracy and low recognition speed. A static gesture recognition algorithm based on an improved YOLOv5s is proposed to address these issues. Content-aware re-assembly of features (CARAFE) replaces the nearest-neighbor up-sampling in YOLOv5s to make full use of the semantic information in the feature map and improve the model's recognition accuracy for gesture regions. The adaptive spatial feature fusion (ASFF) method is introduced to filter out useless information and retain useful information for efficient feature fusion. The bottleneck transformer is introduced into the gesture recognition task for the first time, reducing the number of model parameters and increasing accuracy while accelerating inference. The improved algorithm achieves a mean average precision (mAP) of 96.8%, a 3.1% improvement over the original YOLOv5s algorithm, and the confidence of its detection results is higher than that of the original algorithm.
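For readers unfamiliar with CARAFE, below is a minimal PyTorch sketch of a content-aware upsampling module of the kind the abstract describes as replacing nearest-neighbor up-sampling in the YOLOv5s neck. The class name, intermediate channel width, and kernel sizes (c_mid, k_enc, k_up) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CARAFEUpsample(nn.Module):
    """Sketch of a CARAFE-style content-aware upsampler.

    Hyper-parameters below are illustrative defaults, not the
    configuration used in the paper.
    """

    def __init__(self, channels, scale=2, c_mid=64, k_enc=3, k_up=5):
        super().__init__()
        self.scale = scale
        self.k_up = k_up
        # Kernel prediction: compress channels, then predict one
        # k_up x k_up reassembly kernel for every upsampled position.
        self.compress = nn.Conv2d(channels, c_mid, kernel_size=1)
        self.encode = nn.Conv2d(c_mid, (scale * k_up) ** 2,
                                kernel_size=k_enc, padding=k_enc // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # 1) Predict and normalize content-aware reassembly kernels.
        kernels = self.encode(self.compress(x))           # (b, s^2*k^2, h, w)
        kernels = F.pixel_shuffle(kernels, self.scale)    # (b, k^2, s*h, s*w)
        kernels = F.softmax(kernels, dim=1)

        # 2) Gather each input location's k_up x k_up neighborhood and
        #    replicate it to the upsampled resolution.
        feats = F.unfold(x, self.k_up, padding=self.k_up // 2)  # (b, c*k^2, h*w)
        feats = feats.view(b, c * self.k_up ** 2, h, w)
        feats = F.interpolate(feats, scale_factor=self.scale, mode="nearest")
        feats = feats.view(b, c, self.k_up ** 2,
                           h * self.scale, w * self.scale)

        # 3) Weighted sum of each neighborhood with its predicted kernel.
        return (feats * kernels.unsqueeze(1)).sum(dim=2)


if __name__ == "__main__":
    # Example: a drop-in replacement for
    # nn.Upsample(scale_factor=2, mode="nearest") on a 256-channel map.
    up = CARAFEUpsample(channels=256)
    y = up(torch.randn(1, 256, 20, 20))
    print(y.shape)  # torch.Size([1, 256, 40, 40])
```

Because the predicted kernels are softmax-normalized, each output pixel is a convex combination of its source neighborhood, so the upsampling adapts to feature content instead of copying the nearest input value.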
