Journal
MATHEMATICAL BIOSCIENCES AND ENGINEERING
Volume 17, Issue 6, Pages 7804-7818
Publisher
AMER INST MATHEMATICAL SCIENCES-AIMS
DOI: 10.3934/mbe.2020397
Keywords
fire recognition; feature fusion; convolutional neural network; real-time video
Funding
- CERNET Innovation Project [NGII20190605]
- High Education Science and Technology Planning Program of Shandong Provincial Education Department [J18KA340, J18KA385]
- Yantai Key Research and Development Program [2020YT06000970, 2019XDHZ081]
This paper proposes DeepFireNet, a real-time fire detection framework that combines hand-crafted fire features with a convolutional neural network and can detect fires in real-time video collected by monitoring equipment. DeepFireNet takes the surveillance device's video stream as input. First, based on the static and dynamic characteristics of fire, a large number of non-fire frames in the video stream are filtered out. For the remaining frames, the suspected fire regions are extracted, eliminating interference sources such as lamps and candles and thereby reducing the effect of complex environments on fire detection. The extracted regions are then encoded and fed into the DeepFireNet convolutional network, which extracts deep image features and finally judges whether a fire is present. The network replaces the 5 x 5 convolution kernels in the inception layer with two 3 x 3 convolution kernels and uses only three such improved inception layers as the core architecture of the network, which effectively reduces the network parameters and significantly reduces the amount of computation. The experimental results show that the method can be applied to many different indoor and outdoor scenes and effectively meets the accuracy and real-time requirements of real-time video detection, demonstrating good practicability.
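The 5 x 5 → two 3 x 3 substitution described in the abstract can be checked with a quick parameter count (a sketch only; the channel width below is illustrative and not taken from the paper):

```python
def conv_params(k, c_in, c_out, bias=True):
    """Number of learnable parameters in one k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

c = 64  # illustrative channel width, not from the paper

# One 5x5 convolution versus a stack of two 3x3 convolutions.
# Stacking two 3x3 convs covers the same 5x5 receptive field: each
# output pixel of the second conv sees a 3x3 patch of the first
# conv's output, whose pixels each see a 3x3 input patch, giving
# (3 + 3 - 1) = 5 input pixels in each direction.
p_5x5 = conv_params(5, c, c, bias=False)          # 25 * c * c
p_two_3x3 = 2 * conv_params(3, c, c, bias=False)  # 18 * c * c

print(p_5x5, p_two_3x3)       # 102400 73728
print(1 - p_two_3x3 / p_5x5)  # 0.28 -> 28% fewer parameters
```

The same 28% ratio (18/25) holds for any equal-width channel configuration, which is why the substitution shrinks the network regardless of where in the inception layer it is applied.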