Article

STI-Net: Spatiotemporal integration network for video saliency detection

Journal

INFORMATION SCIENCES
Volume 628, Issue -, Pages 134-147

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.01.106

Keywords

Spatiotemporal saliency; Feature aggregation; Saliency prediction; Saliency fusion

Abstract
Image saliency detection has advanced significantly in recent years, but the community has paid comparatively little attention to video saliency detection. In particular, existing video saliency models are prone to failure in videos with difficult scenarios such as fast motion, dynamic backgrounds, and nonrigid deformation. Furthermore, performing video saliency detection directly with image saliency models, which ignore temporal information, is inappropriate. To address these issues, this study proposes a novel end-to-end spatiotemporal integration network (STI-Net) for detecting salient objects in videos. Specifically, our method comprises three key steps: feature aggregation, saliency prediction, and saliency fusion, which are applied sequentially to generate spatiotemporal deep feature maps, coarse saliency predictions, and the final saliency map. The key advantage of our model lies in its comprehensive exploration of spatial and temporal information across the entire network: the two kinds of features interact in the feature aggregation step, are used to construct boundary cues in the saliency prediction step, and serve as the original information in the saliency fusion step. As a result, the generated spatiotemporal deep feature maps characterize the salient objects precisely and completely, and the coarse saliency predictions have well-defined boundaries, which effectively improves the quality of the final saliency map. Furthermore, shortcut connections are introduced to keep the network easy to train and accurate even when it is deep. Extensive experiments on two publicly available challenging video datasets demonstrate the effectiveness of the proposed model, which achieves performance comparable to state-of-the-art saliency models.
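The abstract names the three stages but this page does not include architectural details. Below is a minimal sketch of how such a pipeline might be wired, assuming PyTorch, a two-stream input (an RGB frame plus a 2-channel motion map such as optical flow), and single-layer stand-ins for each stage; STINetSketch and every layer choice are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class STINetSketch(nn.Module):
    """Illustrative three-stage pipeline: feature aggregation ->
    saliency prediction -> saliency fusion. Only the stage ordering
    and the shortcut connection follow the abstract; all layers,
    channel sizes, and input formats here are assumptions."""

    def __init__(self, ch=64):
        super().__init__()
        # Stage 1: feature aggregation -- spatial (appearance) and
        # temporal (motion) features interact before prediction.
        self.spatial_enc = nn.Conv2d(3, ch, 3, padding=1)   # RGB frame
        self.temporal_enc = nn.Conv2d(2, ch, 3, padding=1)  # e.g. optical flow
        self.aggregate = nn.Conv2d(2 * ch, ch, 3, padding=1)
        # Stage 2: coarse saliency prediction with a boundary head,
        # mirroring the abstract's boundary cue.
        self.coarse_head = nn.Conv2d(ch, 1, 1)
        self.boundary_head = nn.Conv2d(ch, 1, 1)
        # Stage 3: saliency fusion -- the coarse prediction is refined
        # together with the original spatiotemporal features.
        self.fuse = nn.Conv2d(ch + 2, 1, 3, padding=1)

    def forward(self, frame, motion):
        fs = self.spatial_enc(frame)
        ft = self.temporal_enc(motion)
        # Shortcut connection: the aggregated features keep a residual
        # path to the spatial stream, easing training of deep networks.
        f = self.aggregate(torch.cat([fs, ft], dim=1)) + fs
        coarse = torch.sigmoid(self.coarse_head(f))
        boundary = torch.sigmoid(self.boundary_head(f))
        out = torch.sigmoid(self.fuse(torch.cat([f, coarse, boundary], dim=1)))
        return out, coarse, boundary

# Usage: one RGB frame plus a 2-channel motion map.
net = STINetSketch()
frame = torch.randn(1, 3, 224, 224)
motion = torch.randn(1, 2, 224, 224)
saliency, coarse, boundary = net(frame, motion)
print(saliency.shape)  # torch.Size([1, 1, 224, 224])
```

The residual addition after aggregation illustrates why shortcut connections help: the spatial stream always has an identity path to the output, so extra depth cannot degrade the signal it carries.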

