4.6 Article

Video Global Motion Compensation Based on Affine Inverse Transform Model

Journal

SENSORS
Volume 23, Issue 18, Pages: -

Publisher

MDPI
DOI: 10.3390/s23187750

Keywords

image processing; global motion compensation; feature point matching; affine transformation; target detection


This paper investigates the influence of global motion on object detection in video sequences and proposes a method to estimate and compensate for it. Experimental results show that the proposed method accurately compensates for complex global motion in video sequences.

Global motion greatly increases the number of false alarms in object detection for video sequences with dynamic backgrounds. Before detecting targets against a dynamic background, the global motion must therefore be estimated and compensated to eliminate its influence. In this paper, the SURF (Speeded-Up Robust Features) algorithm is combined with the MSAC (M-estimator Sample Consensus) algorithm to process the video: the global motion of the video sequence is estimated from feature-point matching pairs between adjacent frames, yielding the global motion parameters under the dynamic background. On this basis, an inverse transformation model of the affine transformation is proposed, which is applied to each pair of adjacent frames in turn. The model compensates the global motion and outputs a video sequence, after global motion compensation, from a specified viewpoint for object detection. Experimental results show that the proposed algorithm accurately performs motion compensation on video sequences containing complex global motion, and the compensated video sequences achieve a higher peak signal-to-noise ratio and better visual quality.
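The abstract outlines a three-stage pipeline: feature-point matching between adjacent frames, robust estimation of an affine motion model, and inverse warping of each frame to cancel the global motion. The following is a minimal sketch of such a pipeline using OpenCV, not the authors' implementation: ORB stands in for SURF (which is only shipped in the opencv-contrib xfeatures2d module), OpenCV's RANSAC estimator stands in for MSAC, and the function name compensate_global_motion and all parameter values are illustrative assumptions.

```python
# Sketch only: ORB replaces SURF, RANSAC replaces MSAC (see lead-in above).
import cv2
import numpy as np

def compensate_global_motion(prev_frame, curr_frame):
    """Estimate the affine motion between two adjacent frames and warp
    curr_frame back onto prev_frame's viewpoint (inverse transform)."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # 1. Detect and describe feature points in both frames.
    detector = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = detector.detectAndCompute(gray_prev, None)
    kp2, des2 = detector.detectAndCompute(gray_curr, None)

    # 2. Match descriptors between the adjacent frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts_prev = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_curr = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Robustly fit a 2x3 affine model mapping the current frame's points
    #    onto the previous frame's points (i.e. the inverse of the global
    #    motion from the previous frame to the current one).
    A, inlier_mask = cv2.estimateAffine2D(
        pts_curr, pts_prev, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if A is None:
        return curr_frame, None  # estimation failed; return frame unchanged

    # 4. Warp the current frame back to the previous frame's viewpoint.
    h, w = prev_frame.shape[:2]
    compensated = cv2.warpAffine(curr_frame, A, (w, h))
    return compensated, A
```

Applied to consecutive frames of a sequence, the returned compensated frame can be compared against the previous frame (for example with cv2.PSNR) to obtain the kind of peak signal-to-noise-ratio evaluation the abstract refers to.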

