4.6 Article

SFGAN: Unsupervised Generative Adversarial Learning of 3D Scene Flow from the 3D Scene Self

Journal

ADVANCED INTELLIGENT SYSTEMS
Volume 4, Issue 4, Pages -

Publisher

WILEY
DOI: 10.1002/aisy.202100197

Keywords

3D point clouds; generative adversarial network; scene flow estimation; soft correspondence; unsupervised learning

Funding

  1. Natural Science Foundation of China [62073222, U1913204]
  2. Shanghai Municipal Education Commission
  3. Shanghai Education Development Foundation [19SG08]
  4. Shenzhen Science and Technology Program [JSGG20201103094400002]
  5. Science and Technology Commission of Shanghai Municipality [21511101900]
  6. NVIDIA Corporation

Abstract

The study utilizes a generative adversarial network to self-learn 3D scene flow, discriminating between real and synthesized point clouds to achieve accurate scene flow estimation.
Scene flow tracks the 3D motion of each point between adjacent point clouds. It provides fundamental 3D motion perception for autonomous driving and service robots. Although red-green-blue-depth (RGBD) cameras and light detection and ranging (LiDAR) sensors capture discrete 3D points in space, objects and their motions are usually continuous in the macroscopic world. That is, objects remain self-consistent as they flow from the current frame to the next. Based on this insight, a generative adversarial network (GAN) is utilized to self-learn 3D scene flow without ground truth. A fake point cloud is synthesized from the predicted scene flow and the point cloud of the first frame. The generator and discriminator are trained adversarially: the generator synthesizes an indistinguishable fake point cloud, and the discriminator distinguishes the real point cloud from the synthesized fake one. Experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset show that our method achieves promising results. Just as a human can, the proposed method identifies similar local structures in two adjacent frames even without knowing the ground-truth scene flow. The local correspondences can then be estimated correctly, and in turn the scene flow.
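The adversarial scheme described in the abstract can be summarized in a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation: FlowGenerator, PointDiscriminator, and train_step are simplified stand-ins (plain per-point MLPs with a max-pooled global feature), whereas the paper's networks are more elaborate point-cloud architectures. It shows only the core loop: warp frame 1 by the predicted flow to synthesize a fake frame 2, train the discriminator to tell real from fake, and train the generator so the warped cloud becomes indistinguishable from a real second frame.

# Minimal sketch of the adversarial self-supervision idea (assumptions noted above).
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Predicts a 3D flow vector for every point of frame 1, conditioned on frame 2."""
    def __init__(self, hidden=128):
        super().__init__()
        # Per-point MLP over point coordinates plus a global code of frame 2 (simplification).
        self.encode2 = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Sequential(nn.Linear(3 + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, pc1, pc2):
        # pc1, pc2: (B, N, 3)
        code2 = self.encode2(pc2).max(dim=1, keepdim=True).values   # (B, 1, H) global feature of frame 2
        code2 = code2.expand(-1, pc1.shape[1], -1)                  # broadcast to every point of frame 1
        return self.head(torch.cat([pc1, code2], dim=-1))           # (B, N, 3) predicted scene flow

class PointDiscriminator(nn.Module):
    """Scores whether a point cloud looks like a real second frame."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.cls = nn.Linear(hidden, 1)

    def forward(self, pc):
        feat = self.mlp(pc).max(dim=1).values   # permutation-invariant global feature
        return self.cls(feat)                   # (B, 1) real/fake logit

def train_step(gen, disc, opt_g, opt_d, pc1, pc2):
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(pc1.shape[0], 1)
    fake = torch.zeros(pc1.shape[0], 1)

    # Discriminator: real frame 2 vs. frame 1 warped by the predicted flow.
    with torch.no_grad():
        pc2_fake = pc1 + gen(pc1, pc2)          # synthesized "fake" second frame
    loss_d = bce(disc(pc2), real) + bce(disc(pc2_fake), fake)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: make the warped frame indistinguishable from a real frame 2.
    pc2_fake = pc1 + gen(pc1, pc2)
    loss_g = bce(disc(pc2_fake), real)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    gen, disc = FlowGenerator(), PointDiscriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
    # Random stand-ins for two adjacent LiDAR/RGBD frames of 1024 points each.
    pc1, pc2 = torch.randn(2, 1024, 3), torch.randn(2, 1024, 3)
    print(train_step(gen, disc, opt_g, opt_d, pc1, pc2))

Note that only adjacent frames are required; no ground-truth flow enters the loss, which is the self-supervision the paper relies on.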
