Proceedings Paper

Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation

Publisher

Association for Computational Linguistics (ACL)

Funding

  1. Hong Kong Research Grants Council [16204920]
  2. National Natural Science Foundation of China [62106275]

Abstract

Building models of natural language processing (NLP) is challenging in low-resource scenarios where only limited data are available. Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks. To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation. Specifically, we introduce a task-specific memory module to store support set information and construct an imitation module to force query sets to imitate the behaviors of some representative support-set samples stored in the memory. A theoretical analysis is provided to prove the effectiveness of our method, and empirical results also demonstrate that our method outperforms competitive baselines on both text classification and generation tasks.
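
The abstract outlines the core mechanism: a task-specific memory built from the support set, plus an imitation term that ties query-set behavior back to that memory so the adapted model cannot ignore the support set. Below is a minimal PyTorch sketch of that idea in a MAML-style loop. It is an illustration under stated assumptions, not the paper's implementation: the class and function names (MemIMLSketch, meta_step), the single inner gradient step, the attention read over memory, and the MSE imitation term are all hypothetical stand-ins.

```python
# Illustrative sketch only -- names and design choices here are assumptions,
# not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class MemIMLSketch(nn.Module):
    """Toy learner with a task-specific memory over support-set encodings."""

    def __init__(self, in_dim=32, hidden=64, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)
        self.memory = None  # rebuilt for every task from its support set

    def write_memory(self, support_x):
        # Store support-set representations; the memory acts as a fixed
        # reference while adapting to the task.
        with torch.no_grad():
            self.memory = self.encoder(support_x)

    def imitation_loss(self, query_x):
        # Pull each query representation toward an attention-weighted mixture
        # of stored support representations ("imitating" support behavior).
        h = self.encoder(query_x)
        attn = F.softmax(h @ self.memory.t(), dim=-1)
        return F.mse_loss(h, attn @ self.memory)

    def forward(self, x):
        return self.head(self.encoder(x))


def meta_step(model, task_batch, meta_opt, inner_lr=0.1, lam=0.5):
    """One MAML-style outer update; lam weights the imitation term."""
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in task_batch:
        model.write_memory(support_x)
        params = dict(model.named_parameters())
        # Inner loop: one gradient step on the support set.
        s_loss = F.cross_entropy(
            functional_call(model, params, (support_x,)), support_y)
        grads = torch.autograd.grad(s_loss, list(params.values()),
                                    create_graph=True)
        fast = {n: p - inner_lr * g
                for (n, p), g in zip(params.items(), grads)}
        # Outer loop: query loss plus the imitation regularizer, which keeps
        # the adapted model dependent on the support set rather than on
        # memorized meta-training tasks.
        q_loss = F.cross_entropy(
            functional_call(model, fast, (query_x,)), query_y)
        (q_loss + lam * model.imitation_loss(query_x)).backward()
    meta_opt.step()
```

In a real run, task_batch would be an iterable of (support_x, support_y, query_x, query_y) tensors and meta_opt a standard optimizer over model.parameters(). Per the abstract, the actual method stores only representative support-set samples in the memory and also covers generation tasks; both refinements are omitted from this sketch.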
