Article

Deep Relative Tracking

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 26, Issue 4, Pages 1845-1858

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2017.2656628

Keywords

Visual tracking; deep learning; relative model

Funding

  1. National Natural Science Foundation of China [61225009, 61432019, 61572498, 61532009, 61572296]
  2. Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions [IDHT20140224]

Abstract

Most existing tracking methods are direct trackers, which directly exploit foreground and/or background information for object appearance modeling and decide whether an image patch is the target object. As a result, these trackers cannot perform well when the target's appearance changes significantly and becomes different from its model. To deal with this issue, we propose a novel relative tracker, which can effectively exploit the relative relationship among image patches from both foreground and background for object appearance modeling. Different from direct trackers, the proposed relative tracker robustly localizes the target object by selecting the image patch with the highest relative score with respect to the target appearance model. To model the relative relationship among large-scale image patch pairs, we propose a novel and effective deep relative learning algorithm based on a convolutional neural network. We test the proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our method consistently outperforms state-of-the-art trackers due to the powerful capacity of the proposed deep relative model.
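A rough sketch of the relative-ranking idea described above: instead of classifying each candidate patch as target or background in isolation (a "direct" tracker), candidate patches are scored relative to one another, and the patch that wins the pairwise comparisons is taken as the target location. The feature extractor and pairwise scorer below are simple stand-ins for illustration, not the paper's deep relative model.

```python
# Minimal sketch of relative tracking. The paper learns the pairwise
# relative scorer with a CNN over large-scale patch pairs; here we use
# a hypothetical hand-crafted scorer (template similarity) instead.

def extract_features(patch):
    # Hypothetical feature extractor: the paper uses a CNN; here the
    # raw pixel vector stands in for learned features.
    return patch

def relative_score(feat_a, feat_b, template):
    # Positive if patch A matches the appearance model (template)
    # better than patch B does; this pairwise comparison is what the
    # deep relative model would learn.
    def sim(f):
        return -sum((x - t) ** 2 for x, t in zip(f, template))
    return sim(feat_a) - sim(feat_b)

def localize(candidates, template):
    # Return the index of the candidate patch with the highest
    # relative score, i.e. the one preferred over all others.
    feats = [extract_features(p) for p in candidates]
    best = 0
    for i in range(1, len(feats)):
        if relative_score(feats[i], feats[best], template) > 0:
            best = i
    return best
```

Because only relative comparisons are used, the tracker does not need an absolute decision threshold on "target vs. background", which is what makes it less sensitive to heavy appearance changes.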
