Article

Foreground segmentation using convolutional neural networks for multiscale feature encoding

Journal

PATTERN RECOGNITION LETTERS
Volume 112, Pages 256-262

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2018.08.002

Keywords

Foreground segmentation; Background subtraction; Deep learning; Convolutional neural networks; Video surveillance; Pixel classification

Abstract

Several methods have been proposed to solve the moving object segmentation problem accurately in different scenes. However, many of them lack the ability to handle difficult scenarios such as illumination changes, background or camera motion, camouflage effects, and shadows. To address these issues, we propose two robust encoder-decoder type neural networks that generate multi-scale feature encodings in different ways and can be trained end-to-end using only a few training samples. Using the same encoder-decoder configuration, the first model employs a triplet of encoders that takes the input at three scales to embed an image in a multi-scale feature space; in the second model, a Feature Pooling Module (FPM) is plugged on top of a single-input encoder to extract multi-scale features in the middle layers. Both models use a transposed convolutional network in the decoder part to learn a mapping from feature space to image space. To evaluate our models, we entered the Change Detection 2014 Challenge (changedetection.net), and our models, namely FgSegNet_M and FgSegNet_S, outperformed all existing state-of-the-art methods with average F-Measures of 0.9770 and 0.9804, respectively. We also evaluate our models on the SBI2015 and UCSD Background Subtraction datasets. Our source code is made publicly available at https://github.com/lim-anggun/FgSegNet. (c) 2018 Elsevier B.V. All rights reserved.
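To make the architecture described above concrete, the following is a minimal Keras/TensorFlow sketch of a multi-scale, triple-encoder design with a transposed-convolution decoder, loosely in the spirit of FgSegNet_M. The layer depths, channel widths, scale factors, and fusion by concatenation are illustrative assumptions, not the authors' exact configuration; the reference implementation is in the GitHub repository linked in the abstract.

```python
# Illustrative sketch only: a shared encoder applied to the frame at three
# scales, features fused, then a transposed-convolution decoder producing a
# per-pixel foreground probability map. Not the authors' exact network.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_shared_encoder():
    # Small convolutional encoder reused for all three input scales
    # (depth and width chosen arbitrarily for the sketch).
    inp = layers.Input(shape=(None, None, 3))
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    return Model(inp, x, name="shared_encoder")

def build_fgsegnet_like(height=240, width=320):
    encoder = build_shared_encoder()

    # Full-resolution frame; coarser scales are derived by average pooling.
    full = layers.Input(shape=(height, width, 3), name="frame")
    half = layers.AveragePooling2D(2)(full)
    quarter = layers.AveragePooling2D(4)(full)

    # Encode each scale, then upsample back to a common spatial size.
    f_full = encoder(full)                                 # H/2 x W/2
    f_half = layers.UpSampling2D(2)(encoder(half))         # H/2 x W/2
    f_quarter = layers.UpSampling2D(4)(encoder(quarter))   # H/2 x W/2

    fused = layers.Concatenate()([f_full, f_half, f_quarter])

    # Transposed-convolution decoder: feature space -> image space.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                               activation="relu")(fused)
    mask = layers.Conv2D(1, 1, activation="sigmoid", name="fg_mask")(x)

    return Model(full, mask, name="fgsegnet_like")

model = build_fgsegnet_like()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

In this sketch the three scales share encoder weights for compactness; whether the encoders share weights, how the scales are fused, and the decoder depth are all design choices that the paper specifies and this example does not attempt to reproduce.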

