Proceedings Paper

Masked Autoencoders Are Scalable Vision Learners

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01553

Summary

This paper presents a self-supervised learning method for computer vision based on masked autoencoders: by masking a portion of the input image and reconstructing the missing pixels, large models can be trained efficiently and effectively. The approach generalizes well and outperforms supervised pretraining on transfer learning tasks.
Abstract

This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pretraining and shows promising scaling behavior.
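To make the abstract's two core designs concrete, here is a minimal PyTorch sketch of the high-ratio random masking and the asymmetric encoder-decoder. Everything here is an illustrative assumption: the class name TinyMAE, the layer counts, and the dimensions are invented for this sketch, and positional embeddings and target normalization are omitted. It is not the authors' implementation (the official code is released at github.com/facebookresearch/mae).

```python
# Minimal sketch of MAE's two core designs (hypothetical names/sizes):
# 1) mask a high proportion (75%) of input patches;
# 2) asymmetric encoder-decoder: the encoder sees only visible patches,
#    a lightweight decoder reconstructs pixels from latents + mask tokens.
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    def __init__(self, num_patches=196, patch_dim=768,
                 enc_dim=768, dec_dim=512, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_embed = nn.Linear(patch_dim, enc_dim)
        # Encoder operates only on the visible subset (no mask tokens),
        # so it processes ~25% of the tokens: the source of the speedup.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(enc_dim, nhead=8, batch_first=True),
            num_layers=4)
        # Lightweight decoder: fewer layers, narrower width.
        self.enc_to_dec = nn.Linear(enc_dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, nhead=8, batch_first=True),
            num_layers=2)
        self.to_pixels = nn.Linear(dec_dim, patch_dim)

    def random_masking(self, x):
        """Keep a random 25% of patches, independently per sample."""
        B, N, D = x.shape
        n_keep = int(N * (1 - self.mask_ratio))
        noise = torch.rand(B, N, device=x.device)
        ids_shuffle = noise.argsort(dim=1)        # random permutation
        ids_restore = ids_shuffle.argsort(dim=1)  # its inverse
        ids_keep = ids_shuffle[:, :n_keep]
        x_vis = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
        return x_vis, ids_restore

    def forward(self, patches):
        # patches: (B, num_patches, patch_dim) flattened image patches
        x_vis, ids_restore = self.random_masking(self.patch_embed(patches))
        latent = self.encoder(x_vis)              # visible tokens only
        y = self.enc_to_dec(latent)
        # Append mask tokens, then unshuffle back to the original order.
        B, N = ids_restore.shape
        y = torch.cat([y, self.mask_token.expand(B, N - y.shape[1], -1)], 1)
        y = torch.gather(
            y, 1, ids_restore.unsqueeze(-1).expand(-1, -1, y.shape[-1]))
        return self.to_pixels(self.decoder(y))    # (B, num_patches, patch_dim)
```

Training would minimize mean-squared error between the predicted and original pixels on the masked patches only; in the paper's setup the decoder is used only during pretraining, and the encoder alone is kept for the downstream transfer tasks the abstract reports on.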
