Proceedings Paper

Learning to Generate Scene Graph from Natural Language Supervision

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00184


Funding

  1. UW-Madison OVCRGE
  2. WARF
  3. National Science Foundation (NSF) [RI:1813709]

Abstract

Learning from image-text data has recently proven successful for many recognition tasks, yet it remains limited to visual features or individual visual concepts such as objects. In this paper, we propose one of the first methods that learns from image-sentence pairs to extract a graphical representation of localized objects and their relationships within an image, known as a scene graph. To bridge the gap between images and texts, we leverage an off-the-shelf object detector to identify and localize object instances, match the labels of detected regions to concepts parsed from the captions, and thus create pseudo labels for learning scene graphs. Further, we design a Transformer-based model to predict these pseudo labels via a masked token prediction task. Learning from only image-sentence pairs, our model achieves a 30% relative gain over the latest method trained with human-annotated unlocalized scene graphs. Our model also shows strong results for weakly and fully supervised scene graph generation. In addition, we explore an open-vocabulary setting for detecting scene graphs, and present the first result for open-set scene graph generation.
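The pseudo-label creation step described in the abstract — matching detector region labels to concepts parsed from a caption — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual implementation: the data structures, exact-string matching, and function names are all assumptions (the paper's pipeline may use richer lexical matching and a full scene-graph parser).

```python
# Hypothetical sketch: turn caption-parsed (subject, predicate, object)
# triples plus detector outputs into localized pseudo-labeled relationships.
from dataclasses import dataclass

@dataclass
class Region:
    label: str    # detector class name, e.g. "man"
    box: tuple    # (x1, y1, x2, y2) in image coordinates

def create_pseudo_labels(regions, caption_triples):
    """Keep a triple only when both its subject and object concepts
    match some detected region's label (simple exact match here)."""
    by_label = {}
    for r in regions:
        by_label.setdefault(r.label, []).append(r)

    pseudo = []
    for subj, pred, obj in caption_triples:
        for s in by_label.get(subj, []):
            for o in by_label.get(obj, []):
                if s is not o:
                    # a localized pseudo label: (subject box, predicate, object box)
                    pseudo.append((s.box, pred, o.box))
    return pseudo

regions = [Region("man", (10, 10, 50, 90)), Region("horse", (40, 30, 120, 100))]
triples = [("man", "riding", "horse"), ("man", "wearing", "hat")]
print(create_pseudo_labels(regions, triples))
# only the "riding" triple survives: no "hat" region was detected
```

Only grounded triples become supervision, which is why the method needs no human-annotated localized scene graphs: the detector supplies the localization and the caption supplies the relationship labels.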
