Article

Video Pivoting Unsupervised Multi-Modal Machine Translation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3181116

Keywords

Visualization; 3G mobile communication; Machine translation; Task analysis; Transformers; Training; Feature extraction; Multi-modal machine translation; unsupervised learning; visual-semantic embedding space; spatial-temporal graph

Abstract

The main challenge in unsupervised machine translation (UMT) is to associate source-target sentences in the latent space. Since people who speak different languages share biologically similar visual systems, various unsupervised multi-modal machine translation (UMMT) models have been proposed to improve the performance of UMT by employing the visual content of natural images to facilitate alignment. Relation information is commonly an important semantic component of a sentence, and compared with images, videos better capture the interactions between objects and the way an object changes over time. However, current state-of-the-art methods only exploit scene-level or object-level information from images without explicitly modeling object relations; thus, they are sensitive to spurious correlations, which poses a new challenge for UMMT models. In this paper, we employ a spatial-temporal graph obtained from videos to exploit object interactions in space and time for disambiguation and to promote latent space alignment in UMMT. Our model employs multi-modal back-translation and features pseudo-visual pivoting, in which we learn a shared multilingual visual-semantic embedding space and incorporate visually pivoted captioning as additional weak supervision. Experimental results on the VATEX Translation 2020 and HowToWorld datasets show that our model translates well at both the sentence level and the word level and generalizes well when videos are not available during testing.
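
To make the abstract's mechanisms more concrete, the sketch below illustrates two of the named ingredients: a shared multilingual visual-semantic embedding space trained with a contrastive alignment loss, and a multi-modal back-translation round trip. This is a minimal PyTorch sketch under assumed feature dimensions; the names (SharedEmbedding, contrastive_alignment_loss, back_translation_step) and the translator callables are hypothetical, and the hinge-based loss is a standard visual-semantic embedding recipe used here for illustration, not the authors' exact objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbedding(nn.Module):
    """Project video features and sentence features (either language) into one
    joint visual-semantic space; the dimensions are illustrative assumptions."""
    def __init__(self, video_dim=1024, text_dim=512, joint_dim=256):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, joint_dim)
        self.text_proj = nn.Linear(text_dim, joint_dim)

    def forward(self, video_feat, text_feat):
        v = F.normalize(self.video_proj(video_feat), dim=-1)
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        return v, t

def contrastive_alignment_loss(v, t, margin=0.2):
    """Max-of-hinges triplet loss: pull matched (video, sentence) pairs together
    and push apart the hardest in-batch negatives, aligning the two modalities."""
    sim = v @ t.t()                                  # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                    # matched pairs on the diagonal
    cost_t = (margin + sim - pos).clamp(min=0)       # sentence negatives per video
    cost_v = (margin + sim - pos.t()).clamp(min=0)   # video negatives per sentence
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    cost_t = cost_t.masked_fill(mask, 0)
    cost_v = cost_v.masked_fill(mask, 0)
    return cost_t.max(dim=1).values.mean() + cost_v.max(dim=0).values.mean()

def back_translation_step(src_tokens, video_feat, translate_src2tgt, translate_tgt2src, recon_loss):
    """Multi-modal back-translation round trip with hypothetical translator callables
    that also condition on the paired video: src -> pseudo target -> reconstructed src."""
    with torch.no_grad():                            # treat the pseudo target as fixed data
        pseudo_tgt = translate_src2tgt(src_tokens, video_feat)
    recon = translate_tgt2src(pseudo_tgt, video_feat)
    return recon_loss(recon, src_tokens)

# Example of the alignment part, with random features standing in for real encoders:
model = SharedEmbedding()
video = torch.randn(8, 1024)    # e.g. pooled spatial-temporal graph features per clip
sent = torch.randn(8, 512)      # e.g. pooled sentence-encoder features
v, t = model(video, sent)
loss = contrastive_alignment_loss(v, t)
loss.backward()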
