Article

Adaptive Deep Metric Learning for Affective Image Retrieval and Classification

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 23, Pages 1640-1653

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2020.3001527

Keywords

Measurement; Visualization; Semantics; Feature extraction; Task analysis; Image analysis; Image retrieval; Affective image retrieval; convolutional neural network; deep metric learning; visual sentiment analysis

Funding

  1. Major Project for New Generation of AI [2018AAA0100403]
  2. NSFC [61876094, U1933114]
  3. Natural Science Foundation of Tianjin, China [18JCYBJC15400, 18ZXZNGX00110]
  4. Open Project Program of the National Laboratory of Pattern Recognition (NLPR)
  5. Fundamental Research Funds for the Central Universities

Abstract

The paper introduces an adaptive deep metric learning approach for affective images, which improves the retrieval and classification of emotional images by designing an adaptive sentiment similarity loss and a sentiment vector, and proposes a unified multi-task deep framework.
An image is worth a thousand words. Many researchers have conducted extensive studies to understand visual emotions since an increasing number of users express emotions via images and videos online. However, most existing methods based on convolutional neural networks aim to retrieve and classify affective images in a discrete label space while ignoring both the hierarchical and complex nature of emotions. On the one hand, different from concrete and isolated object concepts (e.g., cat and dog), a hierarchical relationship exists among emotions. On the other hand, most widely used deep methods depend on the representation from fully connected layers, which lacks the essential texture information for recognizing emotions. In this work, we address the above problems via adaptive deep metric learning. Specifically, we design an adaptive sentiment similarity loss, which is able to embed affective images considering the emotion polarity and adaptively adjust the margin between different image pairs. To effectively distinguish affective images, we further propose the sentiment vector that captures the texture information extracted from multiple convolutional layers. Finally, we develop a unified multi-task deep framework to simultaneously optimize both retrieval and classification goals. Extensive and thorough evaluations on four benchmark datasets demonstrate that the proposed framework performs favorably against the state-of-the-art methods.
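The abstract does not give the exact formulations. As a rough illustration only, the following PyTorch sketch shows one plausible way to realize the two core ideas it describes: a metric-learning loss whose margin adapts to how emotionally different two images are (including emotion polarity), and a texture-aware "sentiment vector" built from several convolutional layers. The margin schedule, the polarity encoding, and the Gram-statistics aggregation below are assumptions for illustration, not the paper's actual method; in the paper these components are further combined with a classification objective in a multi-task framework.

import torch
import torch.nn.functional as F


def adaptive_sentiment_similarity_loss(anchor, positive, negative,
                                       polarity_gap, base_margin=0.2):
    # Triplet-style metric loss whose margin grows with the (assumed)
    # emotional distance between anchor and negative, e.g. polarity_gap = 0
    # for the same fine-grained emotion, up to 1.0 for opposite polarity.
    # This encoding is hypothetical, not taken from the paper.
    d_pos = F.pairwise_distance(anchor, positive)    # (B,)
    d_neg = F.pairwise_distance(anchor, negative)    # (B,)
    margin = base_margin * (1.0 + polarity_gap)      # adaptive margin per triplet
    return F.relu(d_pos - d_neg + margin).mean()


def sentiment_vector(conv_feats):
    # One plausible texture descriptor: concatenate normalized Gram
    # statistics computed from several convolutional feature maps,
    # each of shape (B, C, H, W).
    parts = []
    for f in conv_feats:
        b, c, h, w = f.shape
        flat = f.view(b, c, h * w)
        gram = torch.bmm(flat, flat.transpose(1, 2)) / (c * h * w)  # (B, C, C)
        parts.append(gram.reshape(b, -1))
    return F.normalize(torch.cat(parts, dim=1), dim=1)


if __name__ == "__main__":
    # Toy usage with random embeddings and feature maps.
    a, p, n = (torch.randn(8, 128) for _ in range(3))
    loss = adaptive_sentiment_similarity_loss(a, p, n, polarity_gap=torch.rand(8))
    feats = [torch.randn(8, 64, 56, 56), torch.randn(8, 128, 28, 28)]
    vec = sentiment_vector(feats)   # shape: (8, 64*64 + 128*128)
    print(loss.item(), vec.shape)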
