Proceedings Paper

Model Agnostic Sample Reweighting for Out-of-Distribution Learning

Publisher

JMLR-JOURNAL MACHINE LEARNING RESEARCH

Keywords

-

Funding

  1. National Key R&D Program of China [2018AAA0102004]
  2. National Natural Science Foundation of China [62141607, U1936219]
  3. [GRF 16201320]

Abstract

This paper proposes a method called MAPLE to effectively address the OOD problem in large overparameterized models. It reweights the training samples and adopts a bilevel formulation to tackle overfitting, and its superiority is verified empirically.
Distributionally robust optimization (DRO) and invariant risk minimization (IRM) are two popular methods proposed to improve the out-of-distribution (OOD) generalization performance of machine learning models. While effective for small models, it has been observed that these methods can be vulnerable to overfitting with large overparameterized models. This work proposes a principled method, Model Agnostic samPLe rEweighting (MAPLE), to effectively address the OOD problem, especially in overparameterized scenarios. Our key idea is to find an effective reweighting of the training samples so that standard empirical risk minimization training of a large model on the weighted training data leads to superior OOD generalization performance. The overfitting issue is addressed by considering a bilevel formulation to search for the sample reweighting, in which the generalization complexity depends on the search space of sample weights instead of the model size. We present a theoretical analysis in the linear case to prove the insensitivity of MAPLE to model size, and empirically verify that it surpasses state-of-the-art methods by a large margin. Code is available at https://github.com/x-zho14/MAPLE.
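
To make the bilevel formulation concrete: the inner problem fits model parameters by weighted ERM on the training data, while the outer problem updates the sample weights so that the resulting model does well under distribution shift. The sketch below is a minimal, hypothetical illustration in the linear case the paper's theory covers, not the authors' implementation (see https://github.com/x-zho14/MAPLE for that); it assumes PyTorch, a closed-form weighted ridge solver for the inner problem, and a held-out set standing in for the OOD objective.

```python
# Minimal bilevel sample-reweighting sketch (illustrative only; NOT the
# authors' MAPLE implementation). Inner problem: weighted ridge regression,
# solved in closed form so gradients flow through it. Outer problem: gradient
# descent on per-sample weight logits against a held-out proxy for OOD loss.
import torch

torch.manual_seed(0)
n_train, d, lam = 200, 10, 1e-2

# Synthetic linear-regression data; a held-out set stands in for the
# shifted (OOD) distribution in this toy setting.
theta_true = torch.randn(d)
X_tr = torch.randn(n_train, d)
y_tr = X_tr @ theta_true + 0.1 * torch.randn(n_train)
X_val = torch.randn(50, d)
y_val = X_val @ theta_true + 0.1 * torch.randn(50)

# Outer variables: one logit per training sample, independent of model size.
logits = torch.zeros(n_train, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    w = torch.softmax(logits, dim=0) * n_train  # normalized sample weights
    # Inner problem in closed form: theta(w) = (X'WX + lam*I)^{-1} X'Wy.
    A = X_tr.T @ (w[:, None] * X_tr) + lam * torch.eye(d)
    b = X_tr.T @ (w * y_tr)
    theta = torch.linalg.solve(A, b)
    # Outer objective: loss of the inner solution on the held-out set.
    outer_loss = ((X_val @ theta - y_val) ** 2).mean()
    opt.zero_grad()
    outer_loss.backward()
    opt.step()

print(f"outer loss after reweighting: {outer_loss.item():.4f}")
```

Note how the outer search space is just the n_train weight logits, which reflects the abstract's point that generalization complexity tracks the sample-weight search space rather than the model size.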

Authors


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific Rigor
-

Recommendations

No data available