Article

Video Pivoting Unsupervised Multi-Modal Machine Translation

Journal

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2022.3181116

Keywords

Visualization; 3G mobile communication; Machine translation; Task analysis; Transformers; Training; Feature extraction; Multi-modal machine translation; unsupervised learning; visual-semantic embedding space; spatial-temporal graph


The main challenge in unsupervised machine translation (UMT) is to associate source and target sentences in the latent space. Because people who speak different languages share biologically similar visual systems, various unsupervised multi-modal machine translation (UMMT) models have been proposed to improve the performance of UMT by employing the visual content of natural images to facilitate alignment. Relational information is an important part of the semantics of a sentence, and compared with images, videos can better capture the interactions between objects and the ways in which an object transforms over time. However, current state-of-the-art methods explore only scene-level or object-level information from images without explicitly modeling object relations; they are therefore sensitive to spurious correlations, which poses a new challenge for UMMT models. In this paper, we employ a spatial-temporal graph obtained from videos to exploit object interactions in space and time for disambiguation and to promote latent-space alignment in UMMT. Our model employs multi-modal back-translation and features pseudo-visual pivoting, in which we learn a shared multilingual visual-semantic embedding space and incorporate visually pivoted captioning as additional weak supervision. Experimental results on the VATEX Translation 2020 and HowToWorld datasets validate the translation capability of our model at both the sentence level and the word level, and show that it generalizes well when videos are not available during the testing phase.
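The shared visual-semantic embedding space described above is commonly trained with a contrastive objective that pulls matching video-sentence pairs together and pushes mismatched pairs apart. The sketch below is a minimal, generic illustration of such an alignment loss, not the paper's exact objective; the function name `contrastive_alignment_loss`, the symmetric InfoNCE formulation, and the temperature value are assumptions for illustration only.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_alignment_loss(video_emb, sent_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired video/sentence embeddings.

    Matching (video_i, sentence_i) pairs are pulled together in the shared
    space; all other pairings within the batch act as negatives. This is a
    generic sketch of visual-semantic alignment, not the paper's exact loss.
    """
    v = l2_normalize(np.asarray(video_emb, dtype=float))
    s = l2_normalize(np.asarray(sent_emb, dtype=float))
    logits = v @ s.T / temperature               # (B, B) similarity matrix

    def xent_diagonal(m):
        # Cross-entropy with the matching pair (the diagonal) as the target.
        m = m - m.max(axis=1, keepdims=True)     # numerical stability
        logp = m - np.log(np.exp(m).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average the video-to-sentence and sentence-to-video directions.
    return 0.5 * (xent_diagonal(logits) + xent_diagonal(logits.T))
```

For example, feeding identical, correctly paired embeddings yields a much lower loss than the same embeddings with the pairing shuffled, which is the behavior an alignment objective of this kind must have.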


