Proceedings Paper

Pretraining boosts out-of-domain robustness for pose estimation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/WACV48630.2021.00190


Funding

  1. Rowland Fellowship
  2. CZI EOSS Grant
  3. Bertarelli Foundation
  4. German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center [FKZ: 01IS18039A]


This study investigates the generalization ability of neural networks for pose estimation and introduces a new dataset of 30 horses for testing model robustness in both within- and out-of-domain scenarios. The findings suggest that architectures pretrained on ImageNet perform better on both within- and out-of-domain data, and transfer learning is beneficial for out-of-domain robustness in pose estimation tasks.
Neural networks are highly effective tools for pose estimation. However, as in other computer vision tasks, robustness to out-of-domain data remains a challenge, especially for the small training sets that are common in real-world applications. Here, we probe the generalization ability of three architecture classes (MobileNetV2s, ResNets, and EfficientNets) for pose estimation. We developed a dataset of 30 horses that allows for both within-domain and out-of-domain (unseen horse) benchmarking: a crucial test for robustness that current human pose estimation benchmarks do not directly address. We show that architectures that perform better on ImageNet perform better on both within- and out-of-domain data if they are first pretrained on ImageNet. We additionally show that better ImageNet models generalize better across animal species. Furthermore, we introduce Horse-C, a new benchmark of common corruptions for pose estimation, and confirm that pretraining increases performance in this domain-shift context as well. Overall, our results demonstrate that transfer learning is beneficial for out-of-domain robustness.
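The distinction between the two evaluation settings in the abstract comes down to how the 30 horses are split. A minimal sketch (all names, counts, and the frames-per-horse value below are illustrative, not the authors' actual protocol): a within-domain split shuffles frames so every horse appears in both train and test, while an out-of-domain split holds out whole identities so test horses are never seen during training.

```python
import random

random.seed(0)

# Toy stand-in for the benchmark: 30 horse identities, each with a few
# labeled frames. In the real dataset, each sample is an annotated image.
horses = [f"horse_{i:02d}" for i in range(30)]
samples = [(h, frame) for h in horses for frame in range(5)]

# Within-domain split: frames are shuffled, so frames from every horse
# can appear in both the training and the test set.
random.shuffle(samples)
cut = int(0.8 * len(samples))
within_train, within_test = samples[:cut], samples[cut:]

# Out-of-domain split: hold out entire identities, so the test set
# contains only horses never seen during training (the "unseen horse"
# setting the abstract describes).
held_out = set(random.sample(horses, 6))
ood_train = [s for s in samples if s[0] not in held_out]
ood_test = [s for s in samples if s[0] in held_out]

# Sanity check: no identity leakage between the out-of-domain splits.
assert {h for h, _ in ood_train}.isdisjoint({h for h, _ in ood_test})
```

The identity-disjoint split is what makes the benchmark a genuine test of generalization: a model can no longer rely on memorizing the appearance of individual animals.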

