3.8 Proceedings Paper

Probabilistic Embeddings for Cross-Modal Retrieval

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00831

Keywords

-

Abstract

Cross-modal retrieval methods aim to build a common representation space for samples from different modalities, such as vision and language. This paper introduces Probabilistic Cross-Modal Embedding (PCME), which represents samples as probability distributions in that space, improving retrieval performance and providing uncertainty estimates that make the embeddings more interpretable. Evaluated on the CUB dataset, whose image-caption matches are exhaustively annotated, PCME outperforms deterministic methods at capturing one-to-many correspondences.
Cross-modal retrieval methods build a common representation space for samples from multiple modalities, typically from the vision and the language domains. For images and their captions, the multiplicity of the correspondences makes the task particularly challenging. Given an image (respectively a caption), there are multiple captions (respectively images) that equally make sense. In this paper, we argue that deterministic functions are not sufficiently powerful to capture such one-to-many correspondences. Instead, we propose to use Probabilistic Cross-Modal Embedding (PCME), where samples from the different modalities are represented as probabilistic distributions in the common embedding space. Since common benchmarks such as COCO suffer from non-exhaustive annotations for cross-modal matches, we propose to additionally evaluate retrieval on the CUB dataset, a smaller yet clean database where all possible image-caption pairs are annotated. We extensively ablate PCME and demonstrate that it not only improves the retrieval performance over its deterministic counterpart but also provides uncertainty estimates that render the embeddings more interpretable.
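To make the core idea concrete, here is a minimal sketch (an illustration under assumptions, not the authors' released code) of how probabilistic embeddings can be compared: each image and caption is encoded as a diagonal Gaussian in the shared space, embeddings are drawn with the reparameterisation trick, and a match probability is obtained by averaging a sigmoid of the scaled, shifted negative Euclidean distance over sample pairs. The function names, sample count, and the values of the scale `a` and shift `b` are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def match_probability(mu_v, logsigma_v, mu_t, logsigma_t,
                      n_samples=7, a=10.0, b=5.0):
    """Monte-Carlo estimate of the probability that an image/caption pair matches.

    Each modality is a diagonal Gaussian in the shared embedding space:
    mu_* and logsigma_* are 1-D tensors of the same dimension. The scale `a`
    and shift `b` applied to the negative distance are illustrative values.
    """
    # Reparameterisation trick: draw n_samples embeddings from each Gaussian.
    z_v = mu_v + torch.randn(n_samples, mu_v.shape[-1]) * logsigma_v.exp()
    z_t = mu_t + torch.randn(n_samples, mu_t.shape[-1]) * logsigma_t.exp()

    # Average a sigmoid of the (scaled, shifted) negative Euclidean distance
    # over all pairs of samples -> a single matching probability in (0, 1).
    d = torch.cdist(z_v, z_t)              # (n_samples, n_samples)
    return torch.sigmoid(-a * d + b).mean()


def soft_contrastive_loss(p_match, is_match):
    """Binary cross-entropy on the match probability: matching pairs are pulled
    together and non-matching pairs pushed apart, in expectation over samples."""
    target = torch.full_like(p_match, float(is_match))
    return F.binary_cross_entropy(p_match, target)


if __name__ == "__main__":
    D = 8                                   # tiny dimensionality, demo only
    mu_img = torch.randn(D)
    mu_cap = mu_img + 0.1 * torch.randn(D)  # a caption whose mean lies near the image
    logsig = torch.full((D,), -2.0)         # small per-dimension variance
    p = match_probability(mu_img, logsig, mu_cap, logsig)
    loss = soft_contrastive_loss(p, is_match=True)
    print(f"match probability: {float(p):.3f}, loss: {float(loss):.3f}")
```

Because a distribution can place probability mass near several distinct captions at once, this kind of sample-based matching naturally accommodates one-to-many correspondences, and the learned variances provide the uncertainty estimates mentioned in the abstract.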

