Article

An Ensemble of Complementary Models for Deep Tracking

Journal

COGNITIVE COMPUTATION
Volume 14, Issue 3, Pages 1096-1106

Publisher

SPRINGER
DOI: 10.1007/s12559-021-09864-3

Keywords

Visual tracking; Convolutional neural network; Information fusion; Attention

Funding

  1. Shenzhen Basic Research Program [JCYJ20170817155854115]
  2. National Natural Science Foundation of China [61976003]

Abstract

The study demonstrates that exploiting the complementary properties of different CNNs can improve visual tracking performance. The importance of each CNN is identified by jointly inferring the candidate location, predicted location, and confidence score, and the adaptive fusion of their prediction scores enhances tracking robustness.
Convolutional neural networks (CNNs) have shown favorable performance on recent tracking benchmark datasets. Some methods extract different levels of features from pre-trained CNNs to deal with various challenging scenarios. Despite demonstrated successes in visual tracking, using features from a single network may yield suboptimal performance due to limitations of that CNN architecture itself. We observe that different CNNs usually have complementary characteristics in representing target objects. In this paper, we therefore propose to leverage the complementary properties of different CNNs for visual tracking. The importance of each CNN is identified by jointly inferring the candidate location, predicted location, and confidence score. The prediction scores of all CNNs are adaptively fused to obtain robust tracking performance. Moreover, we introduce an attention mechanism to highlight discriminative features in each CNN. Experimental results on the OTB2013 and OTB2015 datasets show that the proposed method performs favorably against state-of-the-art methods. We conclude that a combination of complementary models can track objects more accurately and robustly.
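
The abstract describes adaptively fusing the prediction scores of several CNN branches according to their inferred importance. The Python sketch below only illustrates that idea and is not the authors' implementation: the weighting heuristic (each branch's peak response damped by its displacement from the previous target location), the function name fuse_score_maps, and the 17x17 response-map size are assumptions introduced here for demonstration.

    import numpy as np

    def fuse_score_maps(score_maps, prev_pos):
        """Fuse per-branch response maps into a single map.

        score_maps: list of HxW response maps, one per CNN branch.
        prev_pos:   (row, col) of the target centre in the previous frame.
        """
        weights = []
        for s in score_maps:
            peak = float(s.max())                               # branch confidence score
            r, c = np.unravel_index(np.argmax(s), s.shape)      # branch's predicted location
            dist = np.hypot(r - prev_pos[0], c - prev_pos[1])   # displacement from last frame
            # Assumed heuristic: a high peak and a small displacement earn a larger weight.
            weights.append(peak / (1.0 + dist))
        w = np.asarray(weights)
        w /= w.sum()                                            # convex combination of branches
        fused = sum(wi * s for wi, s in zip(w, score_maps))
        return fused, w

    # Example: three hypothetical CNN branches producing 17x17 response maps.
    maps = [np.random.rand(17, 17) for _ in range(3)]
    fused, branch_weights = fuse_score_maps(maps, prev_pos=(8, 8))
    target_pos = np.unravel_index(np.argmax(fused), fused.shape)

The target is then taken at the argmax of the fused map. The attention mechanism that the paper applies inside each CNN to highlight discriminative features is not modelled in this sketch.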
