Proceedings Paper

MVMO: A MULTI-OBJECT DATASET FOR WIDE BASELINE MULTI-VIEW SEMANTIC SEGMENTATION

Publisher

IEEE
DOI: 10.1109/ICIP46576.2022.9897955

Keywords

multi-view; cross-view; semantic segmentation; synthetic dataset

Funding

  1. SPRI-Basque Government ELKARTEK programme and Government of Spain funded projects [KK-2020/00049, KK-2021/00014]

Abstract

MVMO is a synthetic dataset with high object density and wide camera baselines, enabling research in multi-view semantic segmentation and cross-view semantic transfer. New research is needed to utilize the information from multi-view setups effectively.
We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions, and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions that could be resolved with additional viewpoints. Therefore, we expect that MVMO will propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.
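The occlusion argument above can be made concrete with a small sketch: in a scene rendered from 25 views, a class hidden by occluders in one view may be fully visible in another, which is exactly the case where cross-view transfer could help. The layout and helper below are illustrative assumptions (the official MVMO file format and API are not shown here); synthetic masks stand in for the dataset's per-view semantic ground truth.

```python
import numpy as np

# Assumed MVMO-like layout: each scene has 25 views, each view has a
# per-pixel semantic mask over 10 object classes (0 = background here).
# These constants and the helper are a sketch, not the official loader.
NUM_VIEWS = 25
NUM_CLASSES = 10

def per_view_class_presence(masks):
    """For each view, record which class labels appear in its mask.

    Occlusions can hide a class entirely in some views, so the same
    scene yields different presence patterns per view.
    """
    presence = np.zeros((len(masks), NUM_CLASSES + 1), dtype=bool)
    for v, mask in enumerate(masks):
        presence[v, np.unique(mask)] = True
    return presence

# Tiny synthetic stand-in for one scene's masks (8x8 pixels per view).
rng = np.random.default_rng(0)
masks = [rng.integers(0, NUM_CLASSES + 1, size=(8, 8)) for _ in range(NUM_VIEWS)]

presence = per_view_class_presence(masks)
# A class that is visible in at least one view but absent from another
# is a candidate for cross-view semantic transfer.
visible_somewhere = presence.any(axis=0)
missing_somewhere = ~presence.all(axis=0)
candidates = np.flatnonzero(visible_somewhere & missing_somewhere)
print("classes a second view could help recover:", candidates)
```

Such a presence matrix is one simple way to quantify how much the wide baselines and heavy occlusions in a dataset like MVMO make extra views informative.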

