4.2 Article

Enhancing Human Pose Estimation in Ancient Vase Paintings via Perceptually-grounded Style Transfer Learning

Journal

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3569089

Keywords

Pose estimation; Greek vase paintings; style transfer learning; digital humanities

Abstract

Human pose estimation (HPE) is a central part of understanding the visual narration and body movements of characters depicted in artwork collections, such as Greek vase paintings. Unfortunately, existing HPE methods do not generalise well across domains, resulting in poorly recognised poses. Therefore, we propose a two-step approach: (1) adapting a dataset of natural images with known person and pose annotations to the style of Greek vase paintings by means of image style transfer. We introduce a perceptually-grounded style transfer training to enforce perceptual consistency. Then, we fine-tune the base model with this newly created dataset. We show that using style-transfer learning significantly improves the state-of-the-art (SOTA) performance on unlabelled data by more than 6% mean average precision (mAP) as well as mean average recall (mAR). (2) To improve these already strong results further, we created a small dataset (ClassArch) consisting of ancient Greek vase paintings from the 6th-5th century BCE with person and pose annotations. We show that fine-tuning a style-transferred model on this data improves the performance further. In a thorough ablation study, we give a targeted analysis of the influence of style intensities, revealing that the model learns generic domain styles. Additionally, we provide a pose-based image retrieval to demonstrate the effectiveness of our method. The code and pretrained models can be found at https://github.com/angelvillar96/STLPose.
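To make the two steps more concrete, the sketches below illustrate the general mechanics in PyTorch and NumPy. They are not the authors' implementation (see the linked STLPose repository for that); the VGG layer choices, loss weights, and the retrieval similarity metric are assumptions made purely for illustration.

```python
import torch.nn.functional as F
from torchvision import models

# Frozen VGG-19 feature extractor used to measure perceptual (content) and
# style similarity. Inputs are assumed to be ImageNet-normalised tensors.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {22}               # relu4_2 (assumed choice)
STYLE_LAYERS = {1, 6, 11, 20, 29}   # relu1_1 ... relu5_1 (assumed choice)

def extract_features(x):
    """Run x through VGG-19 and keep the activations of the selected layers."""
    feats = {}
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx in CONTENT_LAYERS or idx in STYLE_LAYERS:
            feats[idx] = x
    return feats

def gram_matrix(f):
    """Channel-wise feature correlations, a standard style statistic."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_style_loss(stylized, content_img, style_img, style_weight=1e5):
    """Content term keeps the photo's structure (and hence its annotated pose);
    style term pulls the output towards the vase-painting style."""
    fs = extract_features(stylized)
    fc = extract_features(content_img)
    fy = extract_features(style_img)
    content_loss = sum(F.mse_loss(fs[i], fc[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram_matrix(fs[i]), gram_matrix(fy[i]))
                     for i in STYLE_LAYERS)
    return content_loss + style_weight * style_loss
```

Because the stylised images inherit the keypoint annotations of their source photographs, an off-the-shelf detector (e.g. a COCO-pretrained Keypoint R-CNN) can then be fine-tuned on them without any manual labelling. The pose-based retrieval can likewise be sketched as a nearest-neighbour search over normalised keypoint vectors; cosine similarity is an assumption here, and the paper may rank matches differently.

```python
import numpy as np

def normalise_pose(kpts):
    """kpts: (K, 2) array of 2D keypoints for one detected figure."""
    kpts = kpts - kpts.mean(axis=0)                         # translation invariance
    return (kpts / (np.linalg.norm(kpts) + 1e-8)).ravel()   # scale invariance

def retrieve_by_pose(query_kpts, gallery):
    """gallery: list of (image_id, kpts). Returns ids ranked by pose similarity."""
    q = normalise_pose(query_kpts)
    scored = [(img_id, float(q @ normalise_pose(k))) for img_id, k in gallery]
    return sorted(scored, key=lambda s: s[1], reverse=True)
```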

