Article

Spatiotemporal Dilated Convolution With Uncertain Matching for Video-Based Crowd Estimation

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 24, Pages 261-273

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2021.3050059

Keywords

Feature extraction; Convolution; Training; Spatiotemporal phenomena; Annotations; Three-dimensional displays; Videos; Crowd counting; density map regression; dilated convolution; patch-wise regression loss; spatiotemporal modeling

Funding

  1. Ministry of Science and Technology of Taiwan [MOST-109-2221-E-009-114-MY3, MOST-109-2634-F-009-018, MOST-109-2221-E-001-015, MOST-108-2218-E-002-055, MOST-109-2223-E-009-002-MY3, MOST-109-2218-E-009-025, MOST-109-2218-E-002-015]

Abstract

This paper proposes a novel SpatioTemporal convolutional Dense Network (STDNet) for the video-based crowd counting problem. The network decomposes the 3D convolution and uses 3D spatiotemporal dilated dense convolutions to alleviate the rapid growth of model size. Combining dilated convolution with a channel attention block further enhances the feature representations. A new patch-wise regression loss (PRL) is proposed to improve on the original pixel-wise loss and achieve better convergence. Experimental results demonstrate the superiority of STDNet over state-of-the-art methods for video-based crowd counting.
In this paper, we propose a novel SpatioTemporal convolutional Dense Network (STDNet) to address the video-based crowd counting problem. The network decomposes the 3D convolution and applies 3D spatiotemporal dilated dense convolution to alleviate the rapid growth in model size caused by Conv3D layers. Moreover, since the dilated convolution extracts multiscale features, we combine it with a channel attention block to enhance the feature representations. Because crowds are difficult to annotate, especially in videos, imprecise or inconsistently labeled ground truth may lead to poor convergence of the model. To address this issue, we further propose a new patch-wise regression loss (PRL) to improve on the original pixel-wise loss. Experimental results on three video-based benchmarks, i.e., the UCSD, Mall and WorldExpo'10 datasets, show that STDNet outperforms both image- and video-based state-of-the-art methods. The source code is released at https://github.com/STDNet/STDNet.
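
The abstract does not spell out the layer configuration, so the following PyTorch sketch only illustrates the general idea of factorizing a Conv3D into a dilated 2D spatial convolution plus a 1D temporal convolution, followed by a squeeze-and-excitation-style channel attention block. The channel sizes, dilation rates, and attention design below are assumptions for illustration, not the authors' exact STDNet.

```python
# Illustrative sketch only: layer sizes, dilation rates, and the attention
# design are assumptions; the exact STDNet configuration is not given here.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed design)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (N, C, T, H, W)
        w = x.mean(dim=(2, 3, 4))            # global average pool over T, H, W
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w                         # reweight channels


class DecomposedDilatedBlock(nn.Module):
    """Factorize a 3D convolution into a dilated 2D spatial convolution
    followed by a 1D temporal convolution, keeping the parameter count far
    below a full Conv3D with the same receptive field."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # (1, k, k) kernel: spatial-only, dilated to enlarge the receptive field
        self.spatial = nn.Conv3d(
            in_ch, out_ch, kernel_size=(1, 3, 3),
            padding=(0, dilation, dilation), dilation=(1, dilation, dilation))
        # (k, 1, 1) kernel: temporal-only aggregation across neighboring frames
        self.temporal = nn.Conv3d(
            out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.attn = ChannelAttention(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                    # x: (N, C, T, H, W)
        x = self.act(self.spatial(x))
        x = self.act(self.temporal(x))
        return self.attn(x)


if __name__ == "__main__":
    clip = torch.rand(1, 3, 5, 64, 64)       # (batch, channels, frames, H, W)
    block = DecomposedDilatedBlock(3, 32, dilation=2)
    print(block(clip).shape)                  # torch.Size([1, 32, 5, 64, 64])
```

Similarly, the patch-wise regression loss is only described at a high level. Below is a minimal sketch assuming non-overlapping patches whose summed densities are compared with MSE; the actual patch size and any weighting scheme are not given in this record.

```python
# Sketch of a patch-wise regression loss: the 4x4 patch size and plain MSE
# over patch sums are assumptions, not the authors' published formulation.
import torch
import torch.nn.functional as F


def patch_wise_regression_loss(pred, gt, patch_size: int = 4):
    """Compare predicted and ground-truth density maps patch by patch.

    Summing the density inside each patch tolerates small spatial offsets in
    the point annotations that a strict pixel-wise MSE would penalize.
    pred, gt: (N, 1, H, W) density maps.
    """
    # Sum (not average) the density inside each non-overlapping patch so the
    # patch values remain head counts.
    pred_patches = F.avg_pool2d(pred, patch_size) * patch_size ** 2
    gt_patches = F.avg_pool2d(gt, patch_size) * patch_size ** 2
    return F.mse_loss(pred_patches, gt_patches)


if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)           # stand-ins for real density maps
    gt = torch.rand(2, 1, 64, 64)
    print(patch_wise_regression_loss(pred, gt).item())
```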
