Article

An Efficient Anomaly Detection System for Crowded Scenes Using Variational Autoencoders

Journal

APPLIED SCIENCES-BASEL
Volume 9, Issue 16

Publisher

MDPI
DOI: 10.3390/app9163337

Keywords

video surveillance system; anomaly detection; unsupervised learning; convolutional auto-encoder; variational auto-encoder

Funding

  1. National Natural Science Foundation of China [61701101, U1713216, U1613214]
  2. National Key R&D Program of China [2017YFC0821402]
  3. National Key Robot Project [2017YFB1300900, 2017YFB1301103]
  4. Fundamental Research Fund for the Central Universities of China [N172603001, N172604004, N172604002, N181602014]

Abstract

Anomaly detection in crowded scenes is an important and challenging part of intelligent video surveillance systems. As deep neural networks have achieved notable success in feature representation, the features they extract capture the appearance and motion patterns of different scenes more specifically than the hand-crafted features typically used in traditional anomaly detection approaches. In this paper, we propose a new baseline framework for anomaly detection in complex surveillance scenes based on a variational auto-encoder with convolution kernels that learns feature representations. First, raw frame sequences are provided as input to the variational auto-encoder without any preprocessing, so that it learns the appearance and motion features of the receptive fields. Then, multiple Gaussian models are used to predict the anomaly scores of the corresponding receptive fields. Our proposed two-stage anomaly detection system is evaluated on a video surveillance dataset for a large scene, the UCSD pedestrian datasets, and yields competitive performance compared with state-of-the-art methods.
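The second stage described in the abstract — fitting one Gaussian model per receptive field and scoring test frames against it — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the convolutional VAE encoder that would produce the per-field feature vectors is assumed to exist upstream, and the class name `GaussianScorer`, the regularizer `eps`, and all array shapes are hypothetical choices for the sketch.

```python
import numpy as np

class GaussianScorer:
    """Per-receptive-field Gaussian anomaly scorer (second stage of a
    two-stage pipeline; the VAE encoder producing the features is
    assumed upstream and not shown here)."""

    def __init__(self, eps=1e-6):
        self.eps = eps    # regularizer keeping covariances invertible
        self.mu = None    # per-field mean, shape (F, d)
        self.prec = None  # per-field precision (inverse covariance), (F, d, d)

    def fit(self, feats):
        # feats: (N, F, d) feature vectors from N normal frames,
        # F receptive fields, d-dimensional features per field
        self.mu = feats.mean(axis=0)
        centered = feats - self.mu
        # Per-field covariance: average outer product of centered features
        cov = np.einsum('nfd,nfe->fde', centered, centered) / len(feats)
        cov += self.eps * np.eye(feats.shape[-1])
        self.prec = np.linalg.inv(cov)
        return self

    def score(self, feats):
        # Squared Mahalanobis distance per field; larger = more anomalous
        c = feats - self.mu
        return np.einsum('nfd,fde,nfe->nf', c, self.prec, c)
```

A field whose features drift far from the normal-training distribution receives a large Mahalanobis score, which can then be thresholded to flag the corresponding spatial region as anomalous.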
