Journal
COMPUTER VISION, ECCV 2022, PT XXII
Volume 13682, Pages 644-663
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-20047-2_37
Keywords
Object tracking
Summary
We introduce FEAR, a family of efficient Siamese visual trackers that achieve high accuracy and robustness. By incorporating a dual-template representation and a pixel-wise fusion block, FEAR trackers outperform most Siamese trackers in both accuracy and efficiency. The optimized version, FEAR-XS, offers significantly faster tracking while maintaining near state-of-the-art results.
Abstract
We present FEAR, a family of fast, efficient, accurate, and robust Siamese visual trackers. We introduce a novel, efficient dual-template representation for object model adaptation that incorporates temporal information with only a single learnable parameter. We further improve the tracker architecture with a pixel-wise fusion block. Combining these modules with sophisticated backbones, the FEAR-M and FEAR-L trackers surpass most Siamese trackers on several academic benchmarks in both accuracy and efficiency. Equipped with a lightweight backbone, the optimized version, FEAR-XS, tracks more than 10 times faster than current Siamese trackers while maintaining near state-of-the-art results; it is 2.4x smaller and 4.3x faster than LightTrack, with superior accuracy. In addition, we broaden the definition of model efficiency by introducing the FEAR benchmark, which assesses energy consumption as well as execution speed, and we show that energy consumption is a limiting factor for trackers on mobile devices. Source code, pretrained models, and the evaluation protocol are available at https://github.com/PinataFarms/FEARTracker.
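The abstract states that the dual-template representation incorporates temporal information with a single learnable parameter. A minimal sketch of one plausible realization (assumed here, not taken from the paper): the first-frame template features and a dynamically updated template are blended with one scalar weight, and the blended template is fused pixel-wise with the search-region features. Function names and shapes are illustrative only.

```python
import numpy as np

def blend_templates(static_feat, dynamic_feat, w):
    """Blend two template feature maps with a single scalar weight w in [0, 1].

    In a trained tracker, w would be the single learnable parameter;
    here it is passed in as a plain float for illustration.
    """
    return (1.0 - w) * static_feat + w * dynamic_feat

def pixelwise_fuse(template_feat, search_feat):
    """Toy pixel-wise fusion (an assumption, not the paper's exact block):
    pool the template to a per-channel vector and modulate the search
    features with it element-wise via broadcasting."""
    pooled = template_feat.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    return search_feat * pooled                              # (C, H, W)

# Illustrative feature maps: (channels, height, width)
static = np.ones((256, 8, 8))    # features of the first-frame template
dynamic = np.zeros((256, 8, 8))  # features of a recent candidate template
blended = blend_templates(static, dynamic, 0.25)
fused = pixelwise_fuse(blended, np.ones((256, 16, 16)))
print(blended.mean(), fused.shape)
```

With `w = 0.25` the blend stays close to the static template (mean 0.75 in this toy setup), which matches the intuition that the first-frame appearance should dominate while the dynamic template contributes temporal updates.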