Proceedings Paper

MULTI-MODAL REPRESENTATION LEARNING FOR SHORT VIDEO UNDERSTANDING AND RECOMMENDATION

Publisher

IEEE
DOI: 10.1109/ICMEW.2019.00134

Keywords

Multi-modal Representation; Factorization Machine; Key-Value Memory; Word2Vec; DeepWalk

Abstract

We study the task of short video understanding and recommendation, which predicts a user's preference from multi-modal content, including visual features, text features, audio features, and the user's interaction history. In this paper, we present a multi-modal representation learning method to improve the performance of recommender systems. The method first converts multi-modal content into vectors in an embedding space, and then concatenates these vectors as the input of a multi-layer perceptron to make predictions. We also propose a novel Key-Value Memory that maps dense real values into vectors, capturing richer semantics in a nonlinear manner. Experimental results show that our representation significantly improves over several baselines and achieves superior performance on the dataset of the ICME 2019 Short Video Understanding and Recommendation Challenge.
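To make the described pipeline concrete, the following PyTorch snippet is a minimal sketch, not the authors' code: it illustrates a Key-Value Memory that maps a dense scalar feature to a vector via softmax attention over learned keys, and a model that concatenates per-modality embeddings and feeds them to a multi-layer perceptron. All module names, dimensions, and the distance-based attention formulation are assumptions for illustration.

# Hypothetical sketch of the paper's two core components.
# Names, dimensions, and the attention formulation are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueMemory(nn.Module):
    """Maps a dense real-valued feature to a vector by attending over
    a learned key/value memory (assumed formulation)."""
    def __init__(self, num_slots: int = 8, dim: int = 16):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots))          # one scalar key per slot
        self.values = nn.Parameter(torch.randn(num_slots, dim))   # one vector value per slot

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch,) dense scalar feature, e.g. video duration.
        # Attention weights from negative squared distance to each key,
        # so the mapping from scalar to vector is nonlinear.
        att = F.softmax(-(x.unsqueeze(-1) - self.keys) ** 2, dim=-1)  # (batch, num_slots)
        return att @ self.values                                      # (batch, dim)

class MultiModalMLP(nn.Module):
    """Concatenates per-modality embeddings and predicts preference with an MLP."""
    def __init__(self, visual_dim=128, text_dim=64, audio_dim=64,
                 dense_dim=16, hidden=256):
        super().__init__()
        self.kv_memory = KeyValueMemory(dim=dense_dim)
        in_dim = visual_dim + text_dim + audio_dim + dense_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, visual, text, audio, dense_scalar):
        dense_emb = self.kv_memory(dense_scalar)
        x = torch.cat([visual, text, audio, dense_emb], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # e.g. probability of a like

# Usage sketch on random inputs.
model = MultiModalMLP()
pred = model(torch.randn(4, 128), torch.randn(4, 64),
             torch.randn(4, 64), torch.rand(4))
print(pred.shape)  # torch.Size([4])

The design point of the memory module, as the abstract suggests, is that feeding a raw dense value directly into the MLP gives it only a linear view of that feature, whereas attention over learned value vectors lets the model carve the feature's range into soft, semantically distinct regions.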
