Article

Saliency-Aware Convolution Neural Network for Ship Detection in Surveillance Video

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2019.2897980

Keywords

Marine vehicles; Feature extraction; Real-time systems; Surveillance; Visualization; Object detection; Remote sensing; Ship detection; saliency detection; coastline extraction; object location; CNN

Funding

  1. National Key R&D Plan on Strategic International Scientific and Technological Innovation Cooperation Special Project [2016YFE0202300]
  2. National Natural Science Foundation of China [61671332, 41771452, 41771454]
  3. Guangzhou Science and Technology Project [201604020070]
  4. Key Research and Development Program of Hubei Province of China [2016AAA018]

Abstract

Real-time detection of inshore ships plays an essential role in the efficient monitoring and management of maritime traffic and transportation for ports. Current ship detection methods, which are mainly based on remote sensing or radar images, can hardly meet the real-time requirement because of the latency of image acquisition. In this paper, we propose to use visual images captured by an on-land surveillance camera network to achieve real-time detection. However, due to the complex backgrounds of visual images and the diversity of ship categories, existing convolution neural network (CNN)-based methods are either inaccurate or slow. To achieve high detection accuracy and real-time performance simultaneously, we propose a saliency-aware CNN framework for ship detection that combines comprehensive discriminative ship features, such as deep features, saliency maps, and a coastline prior. The model uses a CNN to predict the category and position of ships and uses global-contrast-based salient region detection to correct the location. We also extract coastline information and incorporate it into both the CNN and the saliency detection to obtain more accurate ship locations. We implement our model on Darknet with CUDA 8.0 and cuDNN v5 and use a real-world visual image dataset for training and evaluation. The experimental results show that our model outperforms representative counterparts (Faster R-CNN, SSD, and YOLOv2) in terms of accuracy and speed.
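The abstract's location-correction step pairs CNN proposals with global-contrast-based salient region detection. The Python sketch below is a minimal, illustrative take on that idea, assuming a histogram-contrast saliency measure over quantized RGB colors and a simple box-shrinking rule; the function names (global_contrast_saliency, refine_box), the bin count, and the threshold are hypothetical and do not reproduce the authors' Darknet implementation or the coastline prior.

import numpy as np

def global_contrast_saliency(image, bins=12):
    # Histogram-contrast saliency (sketch): a quantized color is salient when it
    # is far, on average, from every other color in the image, weighted by how
    # often those colors occur; per-color saliency is then mapped back to pixels.
    h, w, _ = image.shape
    quant = (image.astype(np.float32) / 256.0 * bins).astype(np.int32)
    labels = quant[..., 0] * bins * bins + quant[..., 1] * bins + quant[..., 2]
    flat = labels.ravel()

    n_colors = bins ** 3
    counts = np.bincount(flat, minlength=n_colors).astype(np.float32)
    occupied = np.nonzero(counts)[0]
    centers = np.stack([occupied // (bins * bins),
                        (occupied // bins) % bins,
                        occupied % bins], axis=1).astype(np.float32)

    # Saliency of a color = frequency-weighted distance to all other colors.
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    color_saliency = (dists * counts[occupied][None, :]).sum(axis=1)
    color_saliency /= color_saliency.max() + 1e-8

    # Map per-color saliency back to every pixel via a lookup table.
    lut = np.zeros(n_colors, dtype=np.float32)
    lut[occupied] = color_saliency
    return lut[flat].reshape(h, w)

def refine_box(saliency, box, threshold=0.5):
    # Illustrative location correction: shrink a detector box (x1, y1, x2, y2)
    # to the bounding box of sufficiently salient pixels inside it.
    x1, y1, x2, y2 = box
    ys, xs = np.nonzero(saliency[y1:y2, x1:x2] >= threshold)
    if len(xs) == 0:
        return box  # nothing salient enough inside; keep the CNN box as-is
    return (x1 + xs.min(), y1 + ys.min(), x1 + xs.max() + 1, y1 + ys.max() + 1)

In this sketch, a detector's predicted box is simply tightened to the salient region it contains; the paper additionally constrains both the CNN and the saliency step with an extracted coastline prior, which is omitted here.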
