Journal
COMPUTER VISION, ECCV 2022, PT XXII
Volume 13682, Issue -, Pages 644-663
Publisher: SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-20047-2_37
Keywords
Object tracking
We introduce FEAR, a family of efficient Siamese visual trackers that achieve high accuracy and robustness. By incorporating a dual-template representation and a pixel-wise fusion block, FEAR trackers outperform most Siamese trackers in both accuracy and efficiency. The optimized version, FEAR-XS, offers significantly faster tracking while maintaining near state-of-the-art results.
We present FEAR, a family of fast, efficient, accurate, and robust Siamese visual trackers. We present a novel and efficient way to benefit from a dual-template representation for object model adaptation, which incorporates temporal information with only a single learnable parameter. We further improve the tracker architecture with a pixel-wise fusion block. By plugging sophisticated backbones into the modules above, the FEAR-M and FEAR-L trackers surpass most Siamese trackers on several academic benchmarks in both accuracy and efficiency. Equipped with a lightweight backbone, the optimized version FEAR-XS offers more than 10 times faster tracking than current Siamese trackers while maintaining near state-of-the-art results. The FEAR-XS tracker is 2.4x smaller and 4.3x faster than LightTrack, with superior accuracy. In addition, we expand the definition of model efficiency by introducing the FEAR benchmark, which assesses energy consumption and execution speed. We show that energy consumption is a limiting factor for trackers on mobile devices. Source code, pretrained models, and evaluation protocol are available at https://github.com/PinataFarms/FEARTracker.
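The abstract notes that the dual-template representation injects temporal information with only a single learnable parameter. One plausible reading of that claim (a sketch under assumptions, not the paper's exact formulation; the function name, sigmoid squashing, and feature shapes here are hypothetical) is a learned convex combination of the initial static template and the most recently updated dynamic template:

```python
import numpy as np

def fuse_templates(static_feat, dynamic_feat, raw_w):
    """Hypothetical dual-template fusion (illustrative, not FEAR's exact code).

    static_feat:  template features from the first frame (object appearance model)
    dynamic_feat: template features from a recent frame (temporal information)
    raw_w:        the single learnable scalar parameter

    The raw parameter is squashed into (0, 1) so the result is a convex
    combination of the two templates; with raw_w = 0 both templates
    contribute equally.
    """
    w = 1.0 / (1.0 + np.exp(-raw_w))  # sigmoid keeps the mixing weight in (0, 1)
    return (1.0 - w) * static_feat + w * dynamic_feat

# Toy usage on 1-D feature vectors standing in for template feature maps.
static = np.ones(4)
dynamic = np.zeros(4)
fused = fuse_templates(static, dynamic, raw_w=0.0)  # equal-weight blend
```

Because the blend has only one scalar degree of freedom, the update adds negligible compute and memory, which is consistent with the paper's efficiency focus.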