Proceedings Paper

TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR52688.2022.01177

Funding

  1. NSFC [61733007, 61876212, 62071127, 61773176]
  2. Zhejiang Laboratory Grant [2019NB0AB02, 2021KH0AB05]

Abstract

Although vision transformers (ViTs) have achieved great success in computer vision, their heavy computational cost hampers their application to dense prediction tasks such as semantic segmentation on mobile devices. In this paper, we present a mobile-friendly architecture named Token Pyramid Vision Transformer (TopFormer). The proposed TopFormer takes tokens from various scales as input to produce scale-aware semantic features, which are then injected into the corresponding tokens to augment the representation. Experimental results demonstrate that our method significantly outperforms CNN- and ViT-based networks across several semantic segmentation datasets and achieves a good trade-off between accuracy and latency. On the ADE20K dataset, TopFormer achieves 5% higher mIoU than MobileNetV3 with lower latency on an ARM-based mobile device. Furthermore, the tiny version of TopFormer achieves real-time inference on an ARM-based mobile device with competitive results. The code and models are available at: https://github.com/hustvl/TopFormer.
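The abstract describes the mechanism only at a high level: tokens from several scales are pooled into a compact sequence, a transformer computes scale-aware semantics over it, and those semantics are injected back into each scale's tokens. The PyTorch sketch below illustrates that flow under stated assumptions; the module names (TopFormerSketch, SemanticsInjection), the sigmoid-gated fusion, the channel widths, and the plain-conv token pyramid are all illustrative stand-ins, not the authors' design. The actual implementation is at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticsInjection(nn.Module):
    """Gated fusion of global semantics into one scale's tokens (assumed design)."""

    def __init__(self, local_ch: int, global_ch: int):
        super().__init__()
        self.local_proj = nn.Conv2d(local_ch, global_ch, 1)
        self.gate = nn.Conv2d(global_ch, global_ch, 1)

    def forward(self, local_tokens, global_semantics):
        # Upsample the coarse semantics to the local token resolution, then
        # use them both as a sigmoid gate and as an additive residual.
        g = F.interpolate(global_semantics, size=local_tokens.shape[2:],
                          mode="bilinear", align_corners=False)
        x = self.local_proj(local_tokens)
        return x * torch.sigmoid(self.gate(g)) + g


class TopFormerSketch(nn.Module):
    def __init__(self, chans=(32, 64, 128, 160), embed_dim=384, num_classes=150):
        super().__init__()
        # Stand-in for the lightweight token-pyramid backbone: each stage
        # halves the resolution and emits tokens at that scale.
        self.stages = nn.ModuleList()
        in_ch = 3
        for ch in chans:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True)))
            in_ch = ch
        self.embed = nn.Conv2d(sum(chans), embed_dim, 1)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.inject = nn.ModuleList(
            [SemanticsInjection(ch, embed_dim) for ch in chans])
        self.head = nn.Conv2d(embed_dim, num_classes, 1)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # Pool every scale to the coarsest resolution and concatenate, so
        # the transformer attends over a short, cheap token sequence.
        target = feats[-1].shape[2:]
        pooled = torch.cat([F.adaptive_avg_pool2d(f, target) for f in feats], dim=1)
        tokens = self.embed(pooled)
        b, c, h, w = tokens.shape
        sem = self.encoder(tokens.flatten(2).transpose(1, 2))
        sem = sem.transpose(1, 2).reshape(b, c, h, w)  # scale-aware semantics
        # Inject the semantics back into each scale, fuse, and predict.
        fused = [inj(f, sem) for inj, f in zip(self.inject, feats)]
        out = sum(F.interpolate(f, size=fused[0].shape[2:], mode="bilinear",
                                align_corners=False) for f in fused)
        return self.head(out)
```

With these assumed defaults, TopFormerSketch()(torch.randn(1, 3, 512, 512)) returns a (1, 150, 256, 256) logit map (150 is the ADE20K class count). The key idea the sketch mirrors is that self-attention runs only over tokens pooled to the coarsest scale, which is what keeps the transformer affordable on mobile devices, the accuracy/latency trade-off the abstract highlights.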
