4.6 Article

Sentence part-enhanced BERT with respect to downstream tasks

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Information Systems

Learning multimodal word representation with graph convolutional networks

Wenhao Zhu et al.

Summary: Research has shown that multimodal models outperform text-based models in learning semantic word representations. Motivated by the relationships among language modalities and by the strength of graph convolutional networks (GCNs) in extracting features from non-Euclidean spaces, the authors propose GCNW, a multimodal word representation model that incorporates phonetic and syntactic information via a GCN and updates the modal-relation matrix with a greedy strategy. The model is trained by unsupervised learning and outperforms strong unimodal baselines and existing multimodal models on a range of NLP tasks; the source code is released for reproducible research.

INFORMATION PROCESSING & MANAGEMENT (2021)
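The graph-convolution step the summary refers to can be sketched minimally. This is not the authors' GCNW implementation; it is a generic single GCN layer (symmetric-normalized propagation, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)) applied to a toy graph whose three nodes stand in, hypothetically, for textual, phonetic, and syntactic modality features of one word; the adjacency matrix plays the role of the modal-relation matrix.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU

# Toy graph: 3 hypothetical "modality" nodes for one word (textual,
# phonetic, syntactic), fully connected by a stand-in relation matrix.
adj = np.array([[0., 1., 1.],
                [1., 0., 1.],
                [1., 1., 0.]])
feats = np.random.default_rng(0).normal(size=(3, 4))   # per-modality features
w = np.random.default_rng(1).normal(size=(4, 2))       # learned projection
out = gcn_layer(adj, feats, w)
print(out.shape)  # (3, 2)
```

In GCNW the relation matrix itself is also updated (by a greedy strategy, per the summary), whereas here it is fixed; the sketch only illustrates how GCN propagation mixes information across the connected modality nodes.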

Article Computer Science, Artificial Intelligence

Enhanced Double-Carrier Word Embedding via Phonetics and Writing

Wenhao Zhu et al.

ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING (2020)