Article

Fast and Accurate Visual Tracking with Group Convolution and Pixel-Level Correlation

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 17, Pages -

Publisher

MDPI
DOI: 10.3390/app13179746

Keywords

feature fusion; pixel-level correlation; Siamese network; attention mechanism

In this study, a fast and accurate visual object tracking method based on Siamese networks is proposed, incorporating multi-layer feature information and a pixel-level correlation operation to enhance the feature extraction capability of the network. The algorithm improves precision and success rates on several datasets and performs better in complex scenes such as those with occlusion, illumination changes, and fast motion.
Visual object trackers based on Siamese networks perform well in visual object tracking (VOT); however, tracking accuracy degrades when the target undergoes fast motion, large-scale changes, or occlusion. In this study, to address this problem and to increase the inference speed of the tracker, fast and accurate visual tracking with group convolution and pixel-level correlation based on a Siamese network is proposed. The algorithm incorporates multi-layer feature information on top of the Siamese backbone. We designed a multi-scale feature aggregated channel attention block (MCA) and a global-to-local-information-fused spatial attention block (GSA), which enhance the feature extraction capability of the network. A pixel-level mutual correlation operation matches the search region with the template region, refining the bounding box and reducing background interference. Compared with the latest algorithms, precision and success rates on the UAV123, OTB100, LaSOT, and GOT10K datasets were improved, and our tracker ran at 40 FPS, with better performance in complex scenes such as those with occlusion, illumination changes, and fast motion.
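As a concrete illustration of the pixel-level correlation described above, the following PyTorch sketch treats every spatial position of the template feature map as a 1×1 kernel and correlates it with the search-region features, yielding one response channel per template pixel. The function name `pixelwise_correlation` and all shapes are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of pixel-level (pixel-wise) correlation for Siamese tracking.
# Assumed shapes: template (B, C, Hz, Wz), search (B, C, Hx, Wx).
import torch


def pixelwise_correlation(template: torch.Tensor, search: torch.Tensor) -> torch.Tensor:
    """Return a (B, Hz*Wz, Hx, Wx) tensor: one response map per template pixel."""
    b, c, hz, wz = template.shape
    _, _, hx, wx = search.shape
    # Flatten each template position into a 1x1 "kernel" over the channel dim.
    kernels = template.reshape(b, c, hz * wz).permute(0, 2, 1)  # (B, Hz*Wz, C)
    feats = search.reshape(b, c, hx * wx)                       # (B, C, Hx*Wx)
    # Batched matrix product = dot product of every template pixel
    # with every search pixel.
    corr = torch.bmm(kernels, feats)                            # (B, Hz*Wz, Hx*Wx)
    return corr.reshape(b, hz * wz, hx, wx)


if __name__ == "__main__":
    z = torch.randn(1, 256, 8, 8)    # template features
    x = torch.randn(1, 256, 32, 32)  # search-region features
    print(pixelwise_correlation(z, x).shape)  # torch.Size([1, 64, 32, 32])
```

Because each of the Hz*Wz output channels responds to a single template pixel, the downstream head can localize the target at a finer granularity than naive whole-template cross-correlation, which is consistent with the bounding-box refinement and background suppression described in the abstract.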
