Journal
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)
Pages 1280-1289
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00135
Funding
- NSF [1718221, 2008387, 2045586, 2106825]
- NIFA [2020-67021-32799]
- Cisco Systems Inc. [CG 1377144]
- MRI [1725729]
Masked-attention Mask Transformer (Mask2Former) is a new architecture capable of addressing any image segmentation task and outperforms specialized architectures on multiple datasets.
Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).
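The abstract's central mechanism, masked attention, restricts each object query's cross-attention to the foreground of the mask that the previous decoder layer predicted for that query, rather than attending over the full image. The following is a minimal single-head PyTorch sketch of that idea; the function name, tensor shapes, 0.5 foreground threshold, and empty-mask fallback are illustrative assumptions for exposition, not the paper's exact code.

```python
import torch

def masked_cross_attention(queries, keys, values, mask_logits):
    """Minimal sketch of Mask2Former-style masked attention (single head).

    queries:     (N, C)  object queries
    keys/values: (HW, C) flattened image features
    mask_logits: (N, HW) mask predictions from the previous decoder layer
    """
    scale = queries.shape[-1] ** -0.5
    attn = (queries @ keys.T) * scale          # (N, HW) attention logits

    # Constrain cross-attention to the predicted mask region: positions
    # whose predicted foreground probability is below 0.5 are blocked.
    blocked = mask_logits.sigmoid() < 0.5      # (N, HW), True = ignore

    # Assumed fallback: if a query's predicted mask is empty everywhere,
    # let it attend globally instead of producing NaNs in the softmax.
    empty = blocked.all(dim=-1, keepdim=True)
    attn = attn.masked_fill(blocked & ~empty, float("-inf"))

    return attn.softmax(dim=-1) @ values       # (N, C) updated queries


# Toy usage: 100 queries over a 32x32 feature map with 256 channels.
q = torch.randn(100, 256)
kv = torch.randn(32 * 32, 256)
masks = torch.randn(100, 32 * 32)
out = masked_cross_attention(q, kv, kv, masks)
print(out.shape)  # torch.Size([100, 256])
```

In the paper this replaces standard cross-attention in each Transformer decoder layer, so the mask predictions and the attended features refine each other layer by layer.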
Authors
Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar