Article

Cross-Modality Bridging and Knowledge Transferring for Image Understanding

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 21, Issue 10, Pages 2675-2685

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TMM.2019.2903448

Keywords

Object and scene recognition; image semantic search; cross-modality bridging; multi-task learning; knowledge transferring

Funding

  1. National Basic Research Program of China (973-Program) [2015CB351802]
  2. National Natural Science Foundation of China [61771457, 61732007, 61572488, 61472389, 61872362, U163621, 61671196, 61525206, 61672497]
  3. Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-SYS013]
  4. Zhejiang Province Nature Science Foundation of China [LR17F030006]
  5. National Key Research and Development Program of China [2017YFC0820600]

Abstract

The understanding of web images has been a hot research topic in both the artificial intelligence and multimedia content analysis domains. Web images are composed of various complex foregrounds and backgrounds, which makes designing an accurate and robust learning algorithm a challenging task. To address this problem, we first learn a cross-modality bridging dictionary for the deep and complete understanding of a vast quantity of web images. The proposed algorithm maps visual features into a semantic concept probability distribution, which constructs a global semantic description for images while preserving the local geometric structure. To discover and model the occurrence patterns within and across categories, multi-task learning is introduced to formulate the objective with a capped-ℓ1 penalty, which attains the optimal solution with a higher probability and outperforms traditional convex-function-based methods. Second, we propose a knowledge-based concept transferring algorithm to discover the underlying relations among different categories. Transferring this probability distribution among categories yields a more robust global feature representation and enables the image semantic representation to generalize better as the scenario grows larger. Experimental comparisons and discussion against classical methods on the ImageNet, Caltech-256, SUN397, and Scene15 datasets demonstrate the effectiveness of the proposed method on three traditional image understanding tasks.
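The capped-ℓ1 penalty mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows the generic regularizer sum_i min(|w_i|, θ) on a hypothetical weight matrix, and why it behaves differently from the convex ℓ1 norm:

```python
import numpy as np

def capped_l1_penalty(W, theta):
    """Capped-l1 regularizer: sum over entries of min(|w_i|, theta).

    Unlike the plain l1 norm, any entry whose magnitude already exceeds
    theta contributes only the constant theta, so large (useful) weights
    are not penalized further -- the nonconvexity that such methods
    exploit over traditional convex penalties.
    """
    return float(np.minimum(np.abs(W), theta).sum())

# Toy weight matrix with illustrative (hypothetical) values.
W = np.array([[0.05, 2.0],
              [-0.3, -5.0]])

l1 = float(np.abs(W).sum())               # 7.35: keeps growing with large weights
capped = capped_l1_penalty(W, theta=0.5)  # 0.05 + 0.5 + 0.3 + 0.5 = 1.35
```

Here the two large entries (2.0 and -5.0) are each capped at 0.5, so the penalty distinguishes small noise-level weights from established ones instead of shrinking everything uniformly.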

