Article

Multi-Dataset, Multitask Learning of Egocentric Vision Tasks

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2021.3061479

Keywords

Task analysis; Training; Feature extraction; Activity recognition; Object detection; Annotations; Training data; Egocentric vision; action recognition; multi-dataset training; multitask learning


In this paper, a multitask learning scheme is proposed to address the scarcity of labeled data in egocentric vision tasks like action recognition. Related tasks and datasets are incorporated into the training process, resulting in improved action recognition performance. To overcome the issue of different action labels across datasets, the multitask paradigm is extended to include datasets with different label sets. Experiments on multiple datasets demonstrate the effectiveness of the proposed approach and its ability to automatically discover cross-dataset task correlations.
For egocentric vision tasks such as action recognition, there is a relative scarcity of labeled data. This increases the risk of overfitting during training. In this paper, we address this issue by introducing a multitask learning scheme that employs related tasks as well as related datasets in the training process. Related tasks are indicative of the performed action, such as the presence of objects and the position of the hands. By including related tasks as additional outputs to be optimized, action recognition performance typically increases because the network focuses on relevant aspects in the video. Still, the training data is limited to a single dataset because the set of action labels usually differs across datasets. To mitigate this issue, we extend the multitask paradigm to include datasets with different label sets. During training, we effectively mix batches with samples from multiple datasets. Our experiments on egocentric action recognition in the EPIC-Kitchens, EGTEA Gaze+, ADL and Charades-EGO datasets demonstrate the improvements of our approach over single-dataset baselines. On EGTEA we surpass the current state-of-the-art by 2.47 percent. We further illustrate the cross-dataset task correlations that emerge automatically with our novel training scheme.
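
To make the mixed-batch idea concrete, the following is a minimal sketch of how a shared video encoder with one action head per dataset and shared auxiliary heads (object presence, hand position) could be trained on batches drawn from several datasets. The backbone, head names, class counts, and auxiliary targets are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative dataset indices and class counts; assumptions for this sketch,
# not values taken from the paper.
DATASET_INDEX = {"EPIC": 0, "EGTEA": 1}
ACTION_CLASSES = {"EPIC": 125, "EGTEA": 106}

class MultiDatasetMultitaskModel(nn.Module):
    """Shared encoder with one action head per dataset (label sets differ)
    plus shared auxiliary heads for related tasks."""

    def __init__(self, feat_dim=512, num_objects=50):
        super().__init__()
        # Stand-in encoder; in practice this would be a video CNN.
        self.backbone = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.action_heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in ACTION_CLASSES.items()})
        self.object_head = nn.Linear(feat_dim, num_objects)  # multi-label object presence
        self.hand_head = nn.Linear(feat_dim, 4)              # (x, y) for left and right hand

    def forward(self, clips):
        feats = self.backbone(clips)
        return {
            "actions": {name: head(feats) for name, head in self.action_heads.items()},
            "objects": self.object_head(feats),
            "hands": self.hand_head(feats),
        }

def mixed_batch_loss(model, clips, dataset_ids, action_labels, object_labels, hand_targets):
    """Loss for a batch mixing samples from several datasets: each sample is
    scored only by the action head of its source dataset; auxiliary heads are shared."""
    out = model(clips)
    loss = clips.new_zeros(())
    for name, logits in out["actions"].items():
        mask = dataset_ids == DATASET_INDEX[name]
        if mask.any():
            loss = loss + F.cross_entropy(logits[mask], action_labels[mask])
    loss = loss + F.binary_cross_entropy_with_logits(out["objects"], object_labels)
    loss = loss + F.mse_loss(out["hands"], hand_targets)
    return loss

# Example: a mixed batch of 4 clips (flattened features for the stand-in encoder).
model = MultiDatasetMultitaskModel()
clips = torch.randn(4, 2048)
dataset_ids = torch.tensor([0, 1, 0, 1])
action_labels = torch.tensor([3, 7, 10, 2])
object_labels = torch.randint(0, 2, (4, 50)).float()
hand_targets = torch.rand(4, 4)
loss = mixed_batch_loss(model, clips, dataset_ids, action_labels, object_labels, hand_targets)
loss.backward()

In this sketch, batches are assembled by interleaving samples from the individual dataset loaders so that every action head regularly receives gradients, while the shared auxiliary heads are trained on all samples regardless of origin.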
