Article

Universal Background Subtraction Using Word Consensus Models

Journal

IEEE Transactions on Image Processing
Volume 25, Issue 10, Pages 4768-4781

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TIP.2016.2598691

Keywords

Video segmentation; word consensus; change detection; background subtraction; video signal processing

Funding

  1. NSERC, FRQ-NT Team Grant [2014-PR-172083]
  2. Regroupement pour l'étude des environnements partagés intelligents répartis, FRQ-NT strategic cluster

Abstract

Background subtraction is often used as the first step in video analysis and smart surveillance applications. However, inconsistent performance across different scenarios, caused by a lack of flexibility, remains a serious concern. To address this, we propose a novel non-parametric, pixel-level background modeling approach based on word dictionaries that draws from traditional codebook and sample consensus approaches. In this new approach, the importance of each background sample (or word) is evaluated online based on its recurrence among all local observations. This helps build smaller pixel models that are better suited for long-term foreground detection. Combining these models with a frame-level dictionary and local feedback mechanisms leads to our proposed background subtraction method, coined PAWCS. Experiments on the 2012 and 2014 versions of the ChangeDetection.net data set show that PAWCS outperforms 26 previously tested and published methods in terms of overall F-Measure as well as in most categories taken individually. Our results can be reproduced with a C++ implementation available online.
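
As a rough illustration of the pixel-level word-consensus idea summarized above, the short C++ sketch below maintains, for a single pixel, a small dictionary of background "words" whose persistence grows when they recur among the local observations and decays when they go stale; a new observation is labeled background when enough sufficiently persistent words match it. Every identifier, threshold, and the grayscale-only feature choice here are illustrative assumptions for this sketch, not the actual PAWCS implementation (whose own C++ code is available online).

// Minimal single-pixel sketch of a word-consensus background model.
// All names and thresholds are made up for illustration; this is not
// the authors' PAWCS implementation.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>

struct BackgroundWord {
    uint8_t intensity;       // simplified feature: grayscale value only
    float persistence;       // importance, grown each time the word recurs
    uint64_t lastOccurrence; // frame index of the most recent match
};

class PixelWordModel {
public:
    // Returns true if the observation is classified as background, i.e. if
    // enough "trusted" words (persistence above a threshold) match it.
    bool classify(uint8_t obs, uint64_t frameIdx) {
        int consensus = 0;
        for (auto& w : words_) {
            if (std::abs(int(w.intensity) - int(obs)) <= kColorDist) {
                w.persistence += 1.0f; // a recurring word gains importance
                w.lastOccurrence = frameIdx;
                if (w.persistence >= kMinPersistence) ++consensus;
            }
        }
        const bool isBackground = (consensus >= kMinConsensus);
        if (!isBackground && words_.size() < kMaxWords) {
            // Unmatched observation: tentatively store it as a new word so the
            // model can absorb gradual background changes.
            words_.push_back({obs, 1.0f, frameIdx});
        }
        // Decay words that have not recurred for a long time, then drop the
        // ones whose persistence has become negligible.
        for (auto& w : words_) {
            if (frameIdx - w.lastOccurrence > kStaleFrames) w.persistence *= 0.5f;
        }
        words_.erase(std::remove_if(words_.begin(), words_.end(),
                                    [](const BackgroundWord& w) { return w.persistence < 0.25f; }),
                     words_.end());
        return isBackground;
    }

private:
    static constexpr int kColorDist = 10;           // max intensity gap for a match
    static constexpr float kMinPersistence = 2.0f;  // weight needed to cast a vote
    static constexpr int kMinConsensus = 2;         // votes needed for "background"
    static constexpr uint64_t kStaleFrames = 300;   // frames of absence before decay
    static constexpr std::size_t kMaxWords = 16;    // cap on dictionary size
    std::vector<BackgroundWord> words_;
};

int main() {
    PixelWordModel model;
    for (uint64_t t = 0; t < 50; ++t) model.classify(120, t); // mostly static pixel
    std::cout << "near-static value -> background? " << model.classify(121, 50) << '\n'; // expect 1
    std::cout << "sudden new value  -> background? " << model.classify(200, 51) << '\n'; // expect 0
    return 0;
}

The full method additionally relies on a frame-level dictionary and local feedback mechanisms to adjust itself over time, which this single-pixel sketch omits.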

Authors

Pierre-Luc St-Charles, Guillaume-Alexandre Bilodeau, and Robert Bergevin
