Article

Anchor-Free Tracker Based on Space-Time Memory Network

Journal

IEEE MULTIMEDIA
Volume 30, Issue 1, Pages 73-83

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MMUL.2022.3207016

Keywords

Feature extraction; Transformers; Memory management; Video sequences; Object tracking; Visualization; Data mining; Space-time memory network; Feature cross fusion; Anchor-free

Abstract
In the visual object tracking task, existing trackers cannot adequately handle appearance deformation, occlusion, interference from similar objects, and related challenges. To address these problems, this article proposes a new Anchor-free Tracker based on a Space-time Memory Network (ATSMN). In this work, we innovatively use a space-time memory network, a memory feature fusion network, and a transformer feature cross fusion network. Through the synergy of these components, the tracker can make full use of the temporal context information in memory frames related to the object and better adapt to changes in the object's appearance, yielding accurate classification and regression results. Extensive experimental results on challenging benchmarks show that ATSMN achieves state-of-the-art tracking performance compared with other advanced trackers.
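The transformer feature cross fusion described in the abstract can be pictured as cross-attention between current-frame (search) features and features stored from memory frames. The following is a minimal sketch of that idea, not the authors' implementation; the module name, dimensions, and head count are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossFusion(nn.Module):
    """Hypothetical transformer-style cross fusion: the current-frame
    tokens act as queries, the memory-frame tokens as keys/values."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat: torch.Tensor, memory_feat: torch.Tensor) -> torch.Tensor:
        # Attend from search-frame tokens to memory tokens, then fuse
        # the attended context back with a residual connection.
        fused, _ = self.attn(query_feat, memory_feat, memory_feat)
        return self.norm(query_feat + fused)

# Toy shapes: batch 2, 16 search tokens, 48 memory tokens, feature dim 64.
q = torch.randn(2, 16, 64)
m = torch.randn(2, 48, 64)
out = CrossFusion()(q, m)
print(out.shape)  # torch.Size([2, 16, 64])
```

The fused output keeps the query's token count, so it can feed anchor-free classification and regression heads directly.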

