3.8 Proceedings Paper

Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR46437.2021.00442

Funding

  1. MIAI@Grenoble Alpes [ANR-19-P3IA-0003]

Abstract

Standard learning approaches often produce fragile models that are prone to catastrophic forgetting. We show that randomizing the current domain's distribution makes models inherently more robust to forgetting, and we build on this with a meta-learning strategy that further reduces forgetting when models are transferred to new domains.
Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature, the well-known catastrophic forgetting issue. In particular, when a model consecutively learns from different visual domains, it tends to forget past domains in favor of the most recent ones. In this context, we show that one way to learn models that are inherently more robust against forgetting is domain randomization: for vision tasks, randomizing the current domain's distribution with heavy image manipulations. Building on this result, we devise a meta-learning strategy in which a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different auxiliary meta-domains, while also easing adaptation to them. These meta-domains are likewise generated through randomized image manipulations. We demonstrate empirically, in experiments ranging from classification to semantic segmentation, that our approach yields models that are less prone to catastrophic forgetting when transferred to new domains.
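
To make the two ingredients of the abstract concrete, here is a minimal PyTorch-style sketch of one training step. Everything below is an illustrative assumption rather than the paper's exact recipe: the `heavy_augment` pipeline stands in for the randomized image manipulations, and the single inner gradient step, `inner_lr`, `lam`, and `n_meta_domains` stand in for the meta-objective the authors describe.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call
from torchvision import transforms

# Illustrative "heavy" image manipulations used to synthesize auxiliary
# meta-domains from current-domain batches (assumed choices, not the
# paper's exact transformation set).
heavy_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8, hue=0.4),
    transforms.RandomGrayscale(p=0.5),
    transforms.GaussianBlur(kernel_size=9, sigma=(0.5, 5.0)),
])


def meta_regularized_step(model, optimizer, images, labels,
                          n_meta_domains=3, inner_lr=0.01, lam=0.1):
    """One training step: current-domain task loss plus a regularizer
    penalizing the loss that remains after a one-step adaptation of the
    model to randomized auxiliary meta-domains (MAML-style sketch)."""
    task_loss = F.cross_entropy(model(images), labels)

    params = dict(model.named_parameters())
    meta_loss = torch.zeros((), device=images.device)
    for _ in range(n_meta_domains):
        meta_images = heavy_augment(images)  # synthesize one meta-domain
        # Inner step: adapt the parameters to the meta-domain, keeping
        # the graph so the outer loss can backpropagate through it.
        inner_loss = F.cross_entropy(
            functional_call(model, params, (meta_images,)), labels)
        grads = torch.autograd.grad(
            inner_loss, tuple(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer penalty: loss still incurred on the meta-domain after
        # adaptation, which both penalizes the cost of transferring to
        # the meta-domain and eases adaptation to it.
        meta_loss = meta_loss + F.cross_entropy(
            functional_call(model, adapted, (meta_images,)), labels)

    loss = task_loss + lam * meta_loss / n_meta_domains
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the regularizer is second-order: the outer gradient flows through the inner update via `create_graph=True`. A cheaper first-order variant would drop that flag and treat the adapted parameters as detached from the inner step.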

