Article

Towards well-generalizing meta-learning via adversarial task augmentation

Journal

ARTIFICIAL INTELLIGENCE
Volume 317, Article 103875

Publisher

ELSEVIER
DOI: 10.1016/j.artint.2023.103875

Keywords

Meta-learning; Few-shot learning; Adversarial task augmentation

Abstract

Meta-learning aims to use the knowledge from previous tasks to facilitate the learning of novel tasks. Many meta-learning models elaborately design various task-shared inductive biases and learn them from a large number of tasks, so the generalization capability of the learned inductive bias depends on the diversity of the training tasks. A common assumption in meta-learning is that the training tasks and the test tasks come from the same or similar task distributions. However, this assumption is usually not strictly satisfied in practice, so meta-learning models need to cope with various novel in-domain or cross-domain tasks. To this end, we propose to use task augmentation to increase the diversity of training tasks and thereby improve the generalization capability of meta-learning models. Concretely, we consider the worst-case problem around the base task distribution and derive an adversarial task augmentation method that can generate inductive-bias-adaptive 'challenging' tasks. Our method can be used as a simple plug-and-play module for various meta-learning models to improve their generalization capability. We conduct extensive experiments under in-domain few-shot learning, cross-domain few-shot learning, and unsupervised few-shot learning settings, and evaluate our method on different types of data (images and text). Experimental results show that our method effectively improves the generalization capability of various meta-learning models under different settings. (c) 2023 Elsevier B.V. All rights reserved.
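As a rough illustration of the worst-case formulation described in the abstract, the following minimal PyTorch sketch perturbs a sampled task's inputs by a few gradient-ascent steps so as to maximize the meta-learner's loss, yielding a 'challenging' augmented task. The meta_loss interface, the number of steps, and the step size are illustrative assumptions rather than the authors' exact procedure.

# Hypothetical sketch of adversarial task augmentation; not the authors' exact code.
# A task's support/query inputs are pushed, by gradient ascent, toward the
# worst case for the current meta-learner, producing a "challenging" task.
import torch

def adversarial_task_augmentation(meta_loss, support_x, support_y,
                                  query_x, query_y, steps=5, step_size=0.01):
    """Perturb task inputs to (approximately) maximize the meta-learner's loss.

    meta_loss: callable (support_x, support_y, query_x, query_y) -> scalar loss
        of the meta-learner on the query set after inner-loop adaptation
        (e.g. a MAML-style bilevel loss); assumed differentiable w.r.t. the
        task inputs.
    """
    adv_sx = support_x.clone().detach().requires_grad_(True)
    adv_qx = query_x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = meta_loss(adv_sx, support_y, adv_qx, query_y)
        g_sx, g_qx = torch.autograd.grad(loss, (adv_sx, adv_qx))
        # Gradient *ascent* on the inputs: move toward higher meta-loss.
        adv_sx = (adv_sx + step_size * g_sx.sign()).detach().requires_grad_(True)
        adv_qx = (adv_qx + step_size * g_qx.sign()).detach().requires_grad_(True)
    return adv_sx.detach(), adv_qx.detach()

During meta-training, the perturbed (support, query) pair would be fed to the meta-learner alongside or instead of the original task. This is what makes such a module plug-and-play: it changes only which tasks the model sees, not the model itself.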
