3.8 Proceedings Paper

Generalizable Multi-Camera 3D Pedestrian Detection

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPRW53098.2021.00135

Keywords

-

Funding

  1. Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) [425401/2018-9]


We present a multi-camera 3D pedestrian detection method that does not require training data from the target scene. We estimate pedestrian locations on the ground plane using a novel heuristic based on human body poses and person bounding boxes obtained from an off-the-shelf monocular detector. We then project these locations onto the world ground plane and fuse them with a new formulation of a clique cover problem. We also propose an optional step that exploits pedestrian appearance during fusion by using a domain-generalizable person re-identification model. We evaluated the proposed approach on the challenging WILDTRACK dataset, where it obtained a MODA of 0.569 and an F-score of 0.78, superior to state-of-the-art generalizable detection techniques.
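
The abstract describes two geometric steps that can be made concrete: projecting per-camera ground contact points onto a shared world ground plane, and fusing the projected detections across cameras. Below is a minimal illustrative sketch of those two steps in Python. The homographies, the 0.5 m grouping radius, and the helper names project_to_ground and fuse_detections are hypothetical, and the greedy distance-based grouping is only a rough stand-in for the paper's clique cover formulation (which additionally exploits appearance cues from a re-identification model).

```python
import numpy as np

def project_to_ground(points_px, H):
    """Map image points (N, 2) to world ground-plane coordinates using a
    3x3 image-to-ground homography H (assumed known from calibration)."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous coords
    world = (H @ pts.T).T
    return world[:, :2] / world[:, 2:3]                          # dehomogenize

def fuse_detections(per_camera_pts, radius=0.5):
    """Greedy cross-camera fusion: group projections that fall within
    `radius` metres of a seed point, taking at most one detection per camera,
    and report each group's centroid as a pedestrian location.
    NOTE: a simplification of the paper's clique cover formulation."""
    flat = [(cam, p) for cam, pts in enumerate(per_camera_pts) for p in pts]
    used = [False] * len(flat)
    pedestrians = []
    for i, (cam_i, p_i) in enumerate(flat):
        if used[i]:
            continue
        group = [p_i]
        cams = {cam_i}
        used[i] = True
        for j, (cam_j, p_j) in enumerate(flat):
            if used[j] or cam_j in cams:
                continue
            if np.linalg.norm(p_i - p_j) < radius:
                group.append(p_j)
                cams.add(cam_j)
                used[j] = True
        pedestrians.append(np.mean(group, axis=0))
    return pedestrians

# Hypothetical usage: H1, H2 would come from the cameras' calibration.
H1 = np.eye(3)
H2 = np.eye(3)
cam1_feet = np.array([[320.0, 470.0], [101.0, 455.0]])  # estimated foot points
cam2_feet = np.array([[320.2, 470.3]])
ground = [project_to_ground(cam1_feet, H1), project_to_ground(cam2_feet, H2)]
print(fuse_detections(ground, radius=0.5))
```

In practice each homography would come from the camera calibration supplied with datasets such as WILDTRACK, and the greedy grouping would be replaced by the authors' clique cover optimization over detections from all views.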

