Article

Action recognition using optimized deep autoencoder and CNN for surveillance data streams of non-stationary environments

Publisher

ELSEVIER
DOI: 10.1016/j.future.2019.01.029

Keywords

Big data processing; Action recognition; Online data stream analysis; Optimized deep autoencoder; Convolutional neural network; Machine learning; Non-stationary environment

Funding

  1. National Research Foundation of Korea - Korea Government (MSIP) [2016R1A2B4011712]


Action recognition is a challenging research area in which several convolutional neural network (CNN) based action recognition methods have recently been presented. However, such methods are inefficient for real-time online data stream processing with satisfactory accuracy. Therefore, in this paper we propose an efficient and optimized CNN-based system to process data streams in real time, acquired from the visual sensors of a non-stationary surveillance environment. First, frame-level deep features are extracted using a pre-trained CNN model. Next, an optimized deep autoencoder (DAE) is introduced to learn temporal changes of the actions in the surveillance stream. Furthermore, a non-linear learning approach, a quadratic SVM, is trained for the classification of human actions. Finally, an iterative fine-tuning process is added in the testing phase that can update the parameters of the trained model using the newly accumulated data of the non-stationary environment. Experiments are conducted on benchmark datasets, and the results reveal the better performance of our system in terms of accuracy and running time compared to state-of-the-art methods. We believe that the proposed system is a suitable candidate for action recognition in surveillance data streams of non-stationary environments. (C) 2019 Elsevier B.V. All rights reserved.
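The sketch below is a minimal illustration of the pipeline outlined in the abstract: frame-level features from a pre-trained CNN, a deep autoencoder whose bottleneck code summarizes temporal variation over a window of frames, and a quadratic (degree-2 polynomial kernel) SVM for action classification. The backbone choice (ResNet-50), window length, layer sizes, and all hyperparameters are assumptions for illustration only, not the configuration reported in the paper, and the iterative fine-tuning step of the testing phase is omitted.

```python
# Hypothetical sketch of the three-stage pipeline, not the authors' implementation.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

# (1) Pre-trained CNN used as a frozen frame-level feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()              # keep the 2048-d pooled features
backbone.eval()

@torch.no_grad()
def frame_features(frames):              # frames: (T, 3, 224, 224) tensor
    return backbone(frames)              # -> (T, 2048) frame-level deep features

# (2) Deep autoencoder whose bottleneck code captures temporal changes across
# a fixed-length window of frame features (window length and sizes assumed).
class DAE(nn.Module):
    def __init__(self, window=16, feat_dim=2048, code_dim=256):
        super().__init__()
        in_dim = window * feat_dim
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim))

    def forward(self, x):                # x: (N, window * feat_dim)
        code = self.encoder(x)
        recon = self.decoder(code)       # reconstruction used for DAE training
        return recon, code

# (3) Quadratic SVM trained on the autoencoder codes for action classification.
svm = SVC(kernel="poly", degree=2)
# svm.fit(train_codes, train_labels)     # train_codes: (N, 256), train_labels: (N,)
# predictions = svm.predict(test_codes)
```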

