Article

Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 130, Issue 2, Pages 435-454

Publisher

SPRINGER
DOI: 10.1007/s11263-021-01547-8

Keywords

Language and vision; Transformer; Attention; BERT


This paper reviews the milestones of Transformer architectures in the field of computational linguistics and discusses their application trends and limitations in visuolinguistic cross-modal tasks. It also speculates on some future prospects.
Transformer architectures have brought about fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years. Their success also implies drastic changes in cross-modal tasks involving language and vision, and many researchers have already tackled these problems. In this paper, we review some of the most critical milestones in the field, as well as overall trends in how the Transformer architecture has been incorporated into visuolinguistic cross-modal tasks. Furthermore, we discuss its current limitations and speculate upon some of the prospects that we find imminent.
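The mechanism at the heart of the Transformer architectures reviewed here is scaled dot-product attention. As a minimal sketch (not taken from the paper; function name, shapes, and the toy data are illustrative assumptions), self-attention over a small token sequence can be written as:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K: (seq_len, d_k) query/key matrices; V: (seq_len, d_v) values.
    Returns, for each query, a similarity-weighted sum of the values.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarities
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy self-attention: 3 tokens with 4-dimensional embeddings (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one attended vector per token
```

In cross-modal settings of the kind the paper surveys, the queries may come from one modality (e.g. text tokens) and the keys/values from another (e.g. visual region features), while the computation itself is unchanged.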


