Article

Spatiotemporal Dilated Convolution With Uncertain Matching for Video-Based Crowd Estimation

Journal

IEEE Transactions on Multimedia
Volume 24, Pages 261-273

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TMM.2021.3050059

Keywords

Feature extraction; Convolution; Training; Spatiotemporal phenomena; Annotations; Three-dimensional displays; Videos; Crowd counting; density map regression; dilated convolution; patch-wise regression loss; spatiotemporal modeling

Funding

  1. Ministry of Science and Technology of Taiwan [MOST-109-2221-E-009-114-MY3, MOST-109-2634-F-009-018, MOST-109-2221-E-001-015, MOST-108-2218-E-002-055, MOST-109-2223-E-009-002-MY3, MOST-109-2218-E-009-025, MOST-109-2218-E-002-015]

Abstract

In this paper, we propose a novel SpatioTemporal convolutional Dense Network (STDNet) to address the video-based crowd counting problem. The network decomposes the 3D convolution and employs 3D spatiotemporal dilated dense convolution to alleviate the rapid growth in model size caused by Conv3D layers. Moreover, since dilated convolution extracts multiscale features, we combine it with a channel attention block to enhance the feature representations. Because crowds are difficult to label, especially in videos, imprecise or inconsistently annotated ground truth can lead to poor model convergence. To address this issue, we further propose a new patch-wise regression loss (PRL) to improve upon the original pixel-wise loss. Experimental results on three video-based benchmarks, i.e., the UCSD, Mall, and WorldExpo'10 datasets, show that STDNet outperforms both image- and video-based state-of-the-art methods. The source code is released at https://github.com/STDNet/STDNet.
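
To make the architectural ideas concrete, below is a minimal PyTorch sketch of the main ingredients named in the abstract: a 3D convolution decomposed into a spatial and a temporal part, a densely connected stack of such layers with growing dilation rates, and a squeeze-and-excitation style channel attention gate. The layer widths, dilation rates, growth rate, and the exact attention design here are illustrative assumptions for exposition, not the authors' STDNet configuration.

import torch
import torch.nn as nn

class DecomposedConv3d(nn.Module):
    """Factorizes a k x k x k Conv3D into a 1 x k x k spatial convolution
    followed by a k x 1 x 1 temporal convolution, which grows the parameter
    count far more slowly than a full 3-D kernel."""
    def __init__(self, in_ch, out_ch, k=3, dilation=1):
        super().__init__()
        p = dilation * (k // 2)
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k),
                                 padding=(0, p, p),
                                 dilation=(1, dilation, dilation))
        self.temporal = nn.Conv3d(out_ch, out_ch, (k, 1, 1),
                                  padding=(k // 2, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                  # x: (N, C, T, H, W)
        return self.relu(self.temporal(self.relu(self.spatial(x))))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate that reweights channels so the
    most informative dilated-scale features are emphasized."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = x.mean(dim=(2, 3, 4))          # global average pool -> (N, C)
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w

class DilatedDenseBlock(nn.Module):
    """Densely connected stack of decomposed convolutions whose dilation
    rates grow per layer, yielding multiscale features without pooling."""
    def __init__(self, in_ch, growth=16, dilations=(1, 2, 3)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(DecomposedConv3d(ch, growth, dilation=d))
            ch += growth
        self.attn = ChannelAttention(ch)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.attn(torch.cat(feats, dim=1))

# Smoke test on a tiny clip: batch 1, 8 channels, 4 frames, 32 x 32 pixels.
if __name__ == "__main__":
    block = DilatedDenseBlock(in_ch=8)
    out = block(torch.randn(1, 8, 4, 32, 32))
    print(out.shape)                       # torch.Size([1, 56, 4, 32, 32])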
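
The patch-wise regression loss can likewise be sketched. The intuition from the abstract is that pixel-wise losses over-penalize head annotations that land a few pixels off, whereas comparing density mass aggregated over local patches is tolerant to such labeling noise. The patch size and the pooling-based formulation below are assumptions for illustration, not the paper's exact PRL.

import torch
import torch.nn.functional as F

def patch_wise_regression_loss(pred, target, patch=4):
    """pred, target: (N, 1, H, W) density maps; H, W divisible by patch."""
    # Sum the density inside each non-overlapping patch x patch cell,
    # then regress on the per-patch counts instead of per-pixel values.
    pred_counts = F.avg_pool2d(pred, patch) * patch * patch
    target_counts = F.avg_pool2d(target, patch) * patch * patch
    return F.mse_loss(pred_counts, target_counts)

if __name__ == "__main__":
    target = torch.zeros(1, 1, 16, 16)
    target[0, 0, 8, 8] = 1.0
    pred = torch.zeros(1, 1, 16, 16)
    pred[0, 0, 8, 10] = 1.0                # same count, 2 pixels misplaced
    print(F.mse_loss(pred, target))        # nonzero: penalizes the shift
    print(patch_wise_regression_loss(pred, target))  # zero: same 4x4 patch

In the toy example, the misplaced prediction falls in the same 4 x 4 patch as the ground-truth point, so the patch-wise loss is zero while the pixel-wise MSE is not, which is exactly the tolerance to imprecise labels that the abstract motivates.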
