Article

A Survey on Vision Transformer

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3152247

Keywords

Transformers; Task analysis; Encoding; Computer vision; Computational modeling; Visualization; Object detection; high-level vision; low-level vision; self-attention; transformer; video

Abstract

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformer to computer vision tasks. In a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks such as convolutional and recurrent neural networks. Given its high performance and lower need for vision-specific inductive bias, transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them into different tasks and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component in transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
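The abstract identifies self-attention as the base component of the transformer. As a rough illustration (not code from the surveyed paper), scaled dot-product self-attention over a sequence of token embeddings can be sketched in NumPy as follows; the matrix names and dimensions here are illustrative assumptions:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention.

    x:   (n_tokens, d_model) token embeddings (e.g., flattened image patches)
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise token similarities
    # Softmax over the key axis: each token attends to all tokens.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # each output is a weighted sum of value vectors

rng = np.random.default_rng(0)
n_tokens, d_model, d_k = 4, 8, 8
x = rng.standard_normal((n_tokens, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

In vision transformers the input tokens are typically embeddings of fixed-size image patches rather than words, but the attention computation itself is the same.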

