Article

Multi-view 3D shape style transformation

Journal

The Visual Computer
Volume 38, Issue 2, Pages 669-684

Publisher

Springer
DOI: 10.1007/s00371-020-02042-w

Keywords

3D style transformation; Shape reconstruction; Shape modeling

Funding

  1. Fundamental Research Fund [DUT18RC(4)064]
  2. Natural Science Foundation of China (NSFC) [61762064]
  3. Jiangxi Science Fund for Distinguished Young Scholars [20192BCBL23001]

Abstract

This work introduces a neural network model based on multi-view representations that learns style transformation while preserving the content of 3D shapes. Trained on unpaired shapes from different style domains, the model preserves the structural details of 3D shapes and outperforms baselines and state-of-the-art approaches in experiments.
Transforming the style of 3D shapes to generate diverse outputs with learning-based methods is a challenging task, for two reasons: (1) the lack of training data with different styles and (2) the multi-modal information of 3D shapes, which is hard to disentangle. In this work, a multi-view-based neural network model is proposed to learn style transformation from unpaired domains while preserving the content of 3D shapes. Given two sets of shapes in different style domains, such as Japanese chairs and Ming chairs, multi-view representations of each shape are computed, and the style transformation between the two sets is learned from these representations. The multi-view representation not only preserves the structural details of a 3D shape but also ensures the richness of the training data. At test time, the trained network generates transformed maps by combining the content features extracted from the multi-view representation with new style features. The transformed maps are then consolidated into a 3D point cloud by solving a domain-stability optimization problem: depth maps from all viewpoints are fused to obtain a shape whose style is similar to that of the target shapes. Experimental results demonstrate that the proposed method outperforms baselines and state-of-the-art approaches on style transformation.
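
To make the pipeline concrete, here is a minimal PyTorch sketch of the content/style recombination step described above. It is an illustration under assumed architectures, not the authors' model: the ContentEncoder, StyleEncoder, and Decoder modules and all layer sizes are hypothetical.

# A minimal sketch (not the paper's code) of content/style recombination:
# a content encoder and a style encoder produce separate features, and a
# decoder combines the content code of a source view with the style code
# of a target-domain view to produce a transformed map.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):           # x: (B, 1, H, W) view/depth map
        return self.net(x)          # spatial content features

class StyleEncoder(nn.Module):
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, style_dim),
        )
    def forward(self, x):
        return self.net(x)          # global style code: (B, style_dim)

class Decoder(nn.Module):
    def __init__(self, style_dim=8):
        super().__init__()
        self.fc = nn.Linear(style_dim, 64)   # broadcast style over space
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, content, style):
        s = self.fc(style)[:, :, None, None]  # (B, 64, 1, 1)
        return self.net(content + s)          # transformed view map

# Usage: transform source-domain views with a target-domain style code.
Ec, Es, G = ContentEncoder(), StyleEncoder(), Decoder()
src = torch.rand(4, 1, 64, 64)    # views of, e.g., a Japanese chair
tgt = torch.rand(4, 1, 64, 64)    # views of, e.g., a Ming chair
out = G(Ec(src), Es(tgt))         # content of src, style of tgt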
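
The final fusion step, in which depth maps from several viewpoints are merged into one point cloud, can likewise be sketched with generic pinhole back-projection. The intrinsics (fx, fy, cx, cy) and the (R, t) camera poses below are assumed values for illustration; the paper's domain-stability optimization itself is not reproduced here.

# A minimal sketch (an assumption, not the paper's exact fusion) of
# turning per-view depth maps into a single point cloud: each pixel is
# back-projected through a pinhole camera, rotated into the world frame,
# and the per-view clouds are concatenated.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a (H, W) depth map to camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop background (zero depth)

def fuse_views(depths, poses, fx=100.0, fy=100.0, cx=32.0, cy=32.0):
    """Merge per-view clouds into one world-frame point cloud.

    depths: list of (H, W) arrays; poses: list of (R, t) with R a 3x3
    camera-to-world rotation and t a 3-vector translation.
    """
    clouds = []
    for depth, (R, t) in zip(depths, poses):
        pts = backproject(depth, fx, fy, cx, cy)
        clouds.append(pts @ R.T + t)     # camera frame -> world frame
    return np.concatenate(clouds, axis=0)

# Toy usage: two opposite viewpoints around the object.
d = np.ones((64, 64))                    # flat dummy depth maps
R0, t0 = np.eye(3), np.zeros(3)
R1 = np.diag([-1.0, 1.0, -1.0])          # rotated 180 degrees about y
cloud = fuse_views([d, d], [(R0, t0), (R1, t0)])
print(cloud.shape)                       # (2 * 64 * 64, 3)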
