Article

PGNet: A Part-based Generative Network for 3D object reconstruction

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 194, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2020.105574

Keywords

3D reconstruction; Point cloud generation; Part-based; Semantic reconstruction

Funding

  1. National Science Foundation of China [61971363, 61701191]


Deep-learning generative methods have developed rapidly; for example, various single- and multi-view generative methods for meshes, voxels, and point clouds have been introduced. However, most 3D single-view reconstruction methods generate whole objects at one time, or in a cascaded way for dense structures, which loses the local details of fine-grained structures. These methods are also unsuitable when the generative model is required to provide semantic information for individual parts. This paper proposes an efficient part-based recurrent generative network that generates object parts sequentially from a single-view image and its semantic projection. The advantage of our method is its awareness of part structures; hence it generates more accurate models with fine-grained structures. Experiments show that our method attains higher accuracy than other point set generation methods, particularly for local details. (C) 2020 Published by Elsevier B.V.
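The abstract describes the approach only at a high level. As a rough illustration of what "generating object parts sequentially from a single-view image and its semantic projection" can look like, the sketch below shows a recurrent decoder that emits one part's point set per step from a joint encoding of the image and its semantic projection. The module names, the GRU-based recurrence, the fixed number of parts, and all dimensions are assumptions made for this example; they are not the authors' actual PGNet architecture.

```python
# Hypothetical sketch of a part-based recurrent point set generator.
# All architectural choices here are illustrative assumptions, not PGNet itself.
import torch
import torch.nn as nn

class PartRecurrentGenerator(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=512, points_per_part=512, max_parts=8):
        super().__init__()
        # Encode the RGB image and its semantic projection (stacked channel-wise).
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )
        # Recurrent cell: one step per object part.
        self.cell = nn.GRUCell(feat_dim, hidden_dim)
        # Decode each step's hidden state into one part's point set.
        self.part_decoder = nn.Linear(hidden_dim, points_per_part * 3)
        self.max_parts = max_parts
        self.points_per_part = points_per_part

    def forward(self, image, semantic_projection):
        # image: (B, 3, H, W); semantic_projection: (B, 1, H, W)
        feat = self.encoder(torch.cat([image, semantic_projection], dim=1))
        h = torch.zeros(feat.size(0), self.cell.hidden_size, device=feat.device)
        parts = []
        for _ in range(self.max_parts):
            h = self.cell(feat, h)              # advance the recurrence part by part
            pts = self.part_decoder(h)          # (B, points_per_part * 3)
            parts.append(pts.view(-1, self.points_per_part, 3))
        # Each entry in `parts` is one semantic part; their union forms the object.
        return parts
```

Under this kind of formulation, a per-part reconstruction loss (e.g., Chamfer distance computed part by part) is what would make the part structure explicit, in contrast to methods that regress the whole point cloud at once.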
