Article

An Effective Framework Using Spatial Correlation and Extreme Learning Machine for Moving Cast Shadow Detection

Journal

APPLIED SCIENCES-BASEL
Volume 9, Issue 23, Article 5042

Publisher

MDPI
DOI: 10.3390/app9235042

Keywords

moving cast shadow; feature extraction; extreme learning machine; spatial correlation; post processing

Funding

  1. National Natural Science Foundation of China [61602221, 61602222, 61967010, 31872847, 61907007, 41661083]
  2. Natural Science Foundation of Shandong Province [ZR2017QF011]
  3. Weifang Science and Technology Development Plan Project [2018GX009, 2018GX004, 2019GX003]
  4. Project of Doctoral Foundation of Weifang University [2015BS10, 2018B511]
  5. Provincial Key Research and Development Program of Jiangxi [20181ACE50030]

Abstract

Cast shadows of moving objects significantly degrade the performance of many high-level computer vision applications such as object tracking, object classification, behavior recognition, and scene interpretation. Because shadows share similar motion characteristics with the objects that cast them, moving cast shadow detection remains challenging. In this paper, we present a novel moving cast shadow detection framework based on the extreme learning machine (ELM) that efficiently distinguishes shadow points from foreground object points. First, guided by the physical model of shadows, pixel-level features from different channels of several color spaces and region-level features derived from the spatial correlation of neighboring pixels are extracted from the foreground. Second, an ELM-based classification model is built from labelled shadow points and unlabelled points, enabling points in a new input to be rapidly classified as shadow or non-shadow. Finally, to preserve the integrity of shadows and objects for further image processing, a simple post-processing procedure refines the results, which also markedly improves the accuracy of moving shadow detection. Extensive experiments on two publicly available datasets covering 13 different scenes demonstrate that the proposed framework outperforms representative state-of-the-art methods.
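
The abstract describes the pipeline only at a high level and no implementation accompanies this listing; the minimal Python sketch below illustrates only the ELM classification step under assumed inputs: hypothetical per-pixel feature vectors (standing in for the colour-channel and spatial-correlation features) with binary shadow / non-shadow labels. The class name, feature dimensionality, and hidden-layer size are illustrative choices, not taken from the paper.

import numpy as np

class SimpleELM:
    """Basic extreme learning machine: random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Input weights and biases are drawn randomly once and never trained.
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)             # hidden-layer activations
        # Output weights solved in closed form via the Moore-Penrose pseudo-inverse.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)     # 1 = shadow, 0 = object

# Toy usage with synthetic data standing in for extracted foreground features.
X_train = np.random.rand(500, 8)                     # 8-D feature vectors (assumed)
y_train = (X_train[:, 0] > 0.5).astype(float)        # dummy shadow labels
elm = SimpleELM().fit(X_train, y_train)
shadow_mask = elm.predict(np.random.rand(10, 8))     # per-point shadow decisions

The closed-form solution for the output weights is what makes ELM training fast, consistent with the abstract's claim of rapid classification; the paper's actual feature extraction and post-processing stages are not reproduced here.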
