Article

Denoising Monte Carlo renderings via a multi-scale featured dual-residual GAN

Journal

The Visual Computer
Volume 37, Issue 9-11, Pages 2513-2525

Publisher

Springer
DOI: 10.1007/s00371-021-02204-4

Keywords

Denoising Monte Carlo renderings; Generative adversarial networks; Multi-scale auxiliary features; Dual residual connections

Funding

  1. National Natural Science Foundation of China [61602088]
  2. Sichuan Provincial NSFC [2018JY0528]
  3. Fundamental Research Funds for the Central Universities [Y03019023601008011]
  4. Interactive Technology Research Fund of the Research Center for Interactive Technology Industry, School of Economics and Management, Tsinghua University [RCITI2021T006]
  5. TiMi L1 Studio of Tencent Corporation

Abstract

The paper proposes a novel GAN structure for denoising Monte Carlo renderings that combines dual residual connections with a multi-scale auxiliary feature extraction method. By employing spatial-adaptive blocks built on deformable convolution, the network adapts to variations in spatial texture and edge features, achieving better visual quality and quantitative metrics than previous methods.
Monte Carlo (MC) path tracing produces considerable noise in the rendered image at low samples per pixel. Recently, with the help of inexpensive auxiliary buffers and the generative adversarial network (GAN), deep learning-based MC denoising methods have been able to generate noise-free images with high perceptual quality in seconds. In this paper, we propose a novel GAN structure for denoising Monte Carlo renderings, called the dual residual connection GAN. Our key insight is that dual residual connections can improve the chance of optimal feature selection and implicitly increase the number of potential interactions between modules. We also propose a multi-scale auxiliary feature extraction method that aims to make full use of the rich geometry and texture information in the auxiliary buffers. Moreover, we adopt a spatial-adaptive block with deformable convolution to help the network adapt to variations in spatial texture and edge features. Compared with state-of-the-art methods, our network has fewer parameters and less inference time, and its results surpass those of previous methods in terms of visual quality and quantitative metrics.
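
The two architectural ideas named in the abstract can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: a dual-residual block in which the first paired operation is a spatial-adaptive deformable convolution (using torchvision.ops.DeformConv2d). All module names, channel counts, and the routing of the second skip connection are illustrative assumptions.

# Hypothetical sketch, not the paper's code: a dual-residual block whose paired
# operations include a spatial-adaptive deformable convolution.
from typing import Optional, Tuple

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SpatialAdaptiveConv(nn.Module):
    """Deformable 3x3 convolution whose sampling offsets are predicted per pixel,
    so the receptive field can follow local texture and edge structure."""

    def __init__(self, channels: int):
        super().__init__()
        # 2 offsets (dx, dy) per kernel tap of a 3x3 kernel -> 18 offset channels.
        self.offset_pred = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)
        return self.act(self.deform(x, offsets))


class DualResidualBlock(nn.Module):
    """Two paired operations, each wrapped by its own residual path, so features
    can bypass either half of the block and later blocks can reuse the
    intermediate result, increasing the number of potential module interactions."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.op1 = SpatialAdaptiveConv(channels)   # first paired operation
        self.op2 = nn.Sequential(                  # second paired operation
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(
        self, x: torch.Tensor, long_skip: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        h = self.op1(x) + x                         # residual path 1
        skip = long_skip if long_skip is not None else x
        out = self.op2(h) + skip                    # residual path 2
        return out, h                               # expose intermediate for reuse


if __name__ == "__main__":
    # 64-channel noisy radiance features; in a full denoiser, multi-scale
    # encodings of the auxiliary buffers (albedo, normal, depth, ...) would be
    # fused with these features before the block.
    block = DualResidualBlock(channels=64)
    x = torch.randn(1, 64, 128, 128)
    y, h = block(x)
    print(y.shape, h.shape)  # both torch.Size([1, 64, 128, 128])

A complete denoiser along the lines described above would stack several such blocks inside a GAN generator; the discriminator and loss terms are omitted from this sketch.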

