Article

A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets

Journal

VISUAL COMPUTER
Volume 38, Issue 8, Pages 2939-2970

Publisher

SPRINGER
DOI: 10.1007/s00371-021-02166-7

Keywords

Applications; Computer vision; Datasets; Deep learning; Sensory modalities; Multimodal learning

Summary

The article discusses the rapid progress of multimodal learning, particularly in computer vision, with a focus on developing deep models that integrate heterogeneous visual cues across sensory modalities. Key concepts and algorithms in deep multimodal learning are explored to improve understanding within the computer vision community. The article summarizes six perspectives on deep multimodal learning, presents benchmark datasets for problems in various vision domains, and highlights limitations, challenges, and directions for future research.

Abstract

Research progress in multimodal learning has grown rapidly over the last decade in several areas, especially computer vision. The growing potential of multimodal data streams and deep learning algorithms has contributed to the increasing universality of deep multimodal learning, which involves developing models capable of processing and analyzing multimodal information uniformly. Unstructured real-world data can inherently take many forms, also known as modalities, often including visual and textual content. Extracting relevant patterns from this kind of data remains a motivating goal for researchers in deep learning. In this paper, we seek to improve the understanding of key concepts and algorithms of deep multimodal learning for the computer vision community by exploring how to build deep models that integrate and combine heterogeneous visual cues across sensory modalities. In particular, we summarize six perspectives from the current literature on deep multimodal learning, namely: multimodal data representation, multimodal fusion (i.e., both traditional and deep learning-based schemes), multitask learning, multimodal alignment, multimodal transfer learning, and zero-shot learning. We also survey current multimodal applications and present a collection of benchmark datasets for solving problems in various vision domains. Finally, we highlight the limitations and challenges of deep multimodal learning and provide insights and directions for future research.
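As an informal illustration of the fusion schemes the survey categorizes, the sketch below contrasts feature-level (early) fusion with decision-level (late) fusion in PyTorch. The encoder sizes, modality names, and class count are arbitrary assumptions chosen for demonstration; they are not the specific architectures reviewed in the paper.

import torch
import torch.nn as nn

class EarlyFusionClassifier(nn.Module):
    # Feature-level (early) fusion: per-modality encoders produce embeddings
    # that are concatenated before a shared classification head.
    # All dimensions here are illustrative assumptions.
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512, num_classes=10):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_encoder = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, img_feat, txt_feat):
        # Concatenate the two modality embeddings along the feature axis.
        z = torch.cat([self.img_encoder(img_feat), self.txt_encoder(txt_feat)], dim=-1)
        return self.head(z)

def late_fusion(logits_per_modality):
    # Decision-level (late) fusion: average the predictions produced
    # independently for each modality.
    return torch.stack(logits_per_modality, dim=0).mean(dim=0)

if __name__ == "__main__":
    model = EarlyFusionClassifier()
    img = torch.randn(4, 2048)    # e.g., pre-extracted image features
    txt = torch.randn(4, 768)     # e.g., pre-extracted text embeddings
    print(model(img, txt).shape)  # torch.Size([4, 10])

Early fusion lets the shared head learn cross-modal interactions but requires both modalities at training and inference time; late fusion keeps modality pipelines independent and degrades more gracefully when one modality is missing. The survey discusses these trade-offs (and hybrid schemes) in detail.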
