Proceedings Paper

Rethinking Spatial Invariance of Convolutional Networks for Object Counting

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01902

Keywords

-

Funding

  1. Air Force Research Laboratory [FA8750-19-2-0200]
  2. U.S. Department of Commerce, National Institute of Standards and Technology (NIST) [60NANB17D156]
  3. Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) [D17PC00340]
  4. Defense Advanced Research Projects Agency (DARPA) [HR00111990063]

Abstract

Previous work generally holds that improving the spatial invariance of convolutional networks is the key to object counting. However, after examining several mainstream counting networks, we surprisingly found that overly strict pixel-level spatial invariance causes the networks to overfit noise in the density map generation. In this paper, we replace the original convolution filters with locally connected Gaussian kernels to estimate spatial positions in the density map, so that the feature extraction process can stimulate the density map generation process to overcome annotation noise. Inspired by previous work, we propose a low-rank approximation accompanied with translation invariance to efficiently approximate the massive Gaussian convolution. Our work points to a new direction for follow-up research, which should investigate how to properly relax the overly strict pixel-level spatial invariance for object counting. We evaluate our methods on 4 mainstream object counting networks (i.e., MCNN, CSRNet, SANet, and ResNet-50), with extensive experiments on 7 popular benchmarks covering 3 applications (i.e., crowd, vehicle, and plant counting). Experimental results show that our methods significantly outperform other state-of-the-art methods and achieve promising learning of the spatial positions of objects.
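The abstract names two ingredients: locally adaptive Gaussian kernels for density map generation, and a low-rank, translation-invariant approximation of the resulting bank of Gaussian convolutions. As a minimal sketch of the second idea (assuming PyTorch; the names low_rank_gaussian_smoothing, mix_logits, and the basis widths are illustrative assumptions, not the authors' released implementation), the snippet below smooths a predicted density map with a small basis of fixed Gaussian filters and blends their responses with per-pixel weights.

import torch
import torch.nn.functional as F

def gaussian_kernel_2d(sigma, size):
    # Build a normalized 2D Gaussian kernel of shape (size, size).
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    kernel = torch.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()

def low_rank_gaussian_smoothing(density, mix_logits, sigmas=(1.0, 2.0, 4.0)):
    # density:    (B, 1, H, W) raw density prediction from the backbone
    # mix_logits: (B, R, H, W) per-pixel scores over the R Gaussian basis filters
    # sigmas:     widths of the fixed, translation-invariant Gaussian basis
    responses = []
    for sigma in sigmas:
        size = int(2 * round(3 * sigma) + 1)          # cover roughly +/- 3 sigma
        k = gaussian_kernel_2d(sigma, size).to(density)
        k = k.view(1, 1, size, size)
        responses.append(F.conv2d(density, k, padding=size // 2))
    responses = torch.cat(responses, dim=1)           # (B, R, H, W)
    weights = torch.softmax(mix_logits, dim=1)        # convex per-pixel mixture
    return (responses * weights).sum(dim=1, keepdim=True)

# Example: smooth a dummy prediction with spatially varying effective kernel widths.
density = torch.rand(2, 1, 64, 64)
mix_logits = torch.rand(2, 3, 64, 64)
smoothed = low_rank_gaussian_smoothing(density, mix_logits)
print(smoothed.shape)  # torch.Size([2, 1, 64, 64])

A fixed, shared Gaussian basis keeps each convolution translation invariant and cheap, while the per-pixel mixture lets the effective kernel width vary across the image, which is one way to relax strict pixel-level spatial invariance; a truly locally connected Gaussian layer would need a separate kernel at every position, which is what a low-rank approximation of this kind is meant to avoid.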

Authors


Reviews

Main rating

3.8
Insufficient ratings

Secondary ratings

Novelty: -
Significance: -
Scientific rigor: -

Recommendations

No data available