Article

Hierarchical Visual-Textual Knowledge Distillation for Life-Long Correlation Learning

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 129, Issue 4, Pages 921-941

Publisher

SPRINGER
DOI: 10.1007/s11263-020-01392-1

Keywords

Cross-modal retrieval; Life-long learning; Hierarchical knowledge distillation; Attention transfer; Adaptive network expansion

Abstract

This study introduces the idea of life-long learning into visual-textual cross-modal correlation modeling and proposes a visual-textual life-long knowledge distillation (VLKD) approach. A hierarchical recurrent network leverages knowledge at both the semantic and attention levels across domains and modalities, supporting cross-modal retrieval in life-long scenarios spanning various domains.
Correlation learning among different types of multimedia data, such as visual and textual content, faces major challenges from two perspectives: cross-modal and cross-domain. Cross-modal refers to the heterogeneous properties of different types of multimedia data, where data from different modalities have inconsistent distributions and representations; this leads to the first challenge, cross-modal similarity measurement. Cross-domain refers to the multi-source nature of multimedia data drawn from various domains, in which data from new domains arrive continually; this leads to the second challenge, model storage and retraining. Correlation learning therefore requires a cross-modal continual learning approach in which only the data from new domains are used for training, while previously learned correlation capabilities are preserved. To address these issues, we introduce the idea of life-long learning into visual-textual cross-modal correlation modeling and propose a visual-textual life-long knowledge distillation (VLKD) approach. We construct a hierarchical recurrent network that leverages knowledge at both the semantic and attention levels through adaptive network expansion to support cross-modal retrieval in life-long scenarios across various domains. Extensive experiments on multiple cross-modal datasets with different domains verify the effectiveness of the proposed VLKD approach for life-long cross-modal retrieval.
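The abstract names two distillation levels, semantic and attention, but gives no implementation details. As a rough generic illustration of what distilling at these two levels can look like (this is not the authors' VLKD code; every function name and hyperparameter below is hypothetical), here is a minimal PyTorch sketch that combines standard temperature-scaled logit distillation with attention-map transfer:

```python
# Illustrative sketch only: a generic two-level distillation objective.
# Semantic level: temperature-scaled KL divergence on output logits.
# Attention level: matching normalized spatial attention maps derived
# from intermediate feature maps. All names here are hypothetical.
import torch
import torch.nn.functional as F

def semantic_kd_loss(student_logits, teacher_logits, T=2.0):
    """Semantic-level distillation: match softened class distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence scaled by T^2, the standard temperature correction.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def attention_transfer_loss(student_feat, teacher_feat):
    """Attention-level distillation on feature maps of shape (N, C, H, W)."""
    def attention_map(feat):
        a = feat.pow(2).mean(dim=1).flatten(1)  # (N, H*W) spatial attention
        return F.normalize(a, dim=1)
    return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()

def lifelong_loss(task_loss, s_logits, t_logits, s_feat, t_feat,
                  lam_sem=1.0, lam_att=0.5):
    """Hypothetical combined objective for training on a new domain while
    distilling from the model trained on previously seen domains."""
    return (task_loss
            + lam_sem * semantic_kd_loss(s_logits, t_logits)
            + lam_att * attention_transfer_loss(s_feat, t_feat))

if __name__ == "__main__":
    s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
    s_feat, t_feat = torch.randn(8, 64, 7, 7), torch.randn(8, 64, 7, 7)
    print(lifelong_loss(torch.tensor(0.0), s_logits, t_logits, s_feat, t_feat))
```

In a life-long setting such as the one the paper targets, the "teacher" would be the frozen model from previously seen domains, so that training on a new domain with only new-domain data preserves earlier correlation knowledge.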
