Journal: COMPUTER GRAPHICS FORUM
Volume 41, Issue 1, Pages 453-464
Publisher: WILEY
DOI: 10.1111/cgf.14446
Keywords
image processing; image and video processing
Funding
- European Union's Horizon 2020 research and innovation program, through the CHAMELEON project (European Research Council) [682080]
- DyViTo project (MSCA-ITN) [765121]
- PRIME project (MSCA-ITN) [956585]
This article presents an image-based editing method that modifies the material appearance of an object by changing high-level perceptual attributes. The method utilizes a two-step generative network to drive appearance changes and generate images with high-frequency details. To train the network, the researchers augmented an existing material appearance dataset with perceptual judgments of high-level attributes obtained through crowd-sourced experiments, and employed training strategies that avoided the need for original-edited image pairs. The perception of appearance in the edited images was validated through a user study.
Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that makes it possible to modify the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
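The two-step design described in the abstract (a first network that drives the attribute change, followed by a second that restores high-frequency detail) could be sketched roughly as below. This is a minimal illustrative assumption, not the authors' architecture: all module names, layer sizes, and the two-dimensional attribute vector (Glossy, Metallic) are placeholders.

```python
import torch
import torch.nn as nn

class CoarseEditor(nn.Module):
    """Step 1 (illustrative): shifts object appearance conditioned on a
    high-level attribute delta, e.g. (+Glossy, 0 Metallic)."""
    def __init__(self, attr_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + attr_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, img, attr_delta):
        # Broadcast the attribute change over spatial dims and concatenate
        # it with the input image as extra channels.
        b, _, h, w = img.shape
        a = attr_delta.view(b, -1, 1, 1).expand(b, attr_delta.shape[1], h, w)
        return self.net(torch.cat([img, a], dim=1))

class DetailRefiner(nn.Module):
    """Step 2 (illustrative): adds high-frequency detail on top of the
    coarse edit via a residual connection to the original image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, coarse, original):
        return coarse + self.net(torch.cat([coarse, original], dim=1))

# Hypothetical editing pipeline: increase "Glossy" by +1, leave "Metallic" unchanged.
img = torch.rand(1, 3, 64, 64)
coarse = CoarseEditor()(img, torch.tensor([[1.0, 0.0]]))
edited = DetailRefiner()(coarse, img)
```

Splitting the edit into a coarse appearance change plus a residual refinement stage mirrors the abstract's motivation: the first network only has to get the perceptual shift right, while the second recovers the fine detail that a single generator tends to blur.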