Journal
COMPUTER GRAPHICS FORUM
Volume 39, Issue 4, Pages 35-45
Publisher
WILEY
DOI: 10.1111/cgf.14052
Keywords
-
Funding
- NSF [1617234, 1703957, 1764078]
- ONR [N000141712687]
- Ronald L. Graham Chair
- Google Fellowship
- UC San Diego Center for Visual Computing
- Adobe Fellowship
- Div Of Information & Intelligent Systems
- Direct For Computer & Info Scie & Enginr [1764078] Funding Source: National Science Foundation
Abstract
Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, focusing specifically on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function that aggregates photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons than previous photon mapping methods, significantly improving the computational efficiency of photon mapping.
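To make the "swapping the kernel density estimator" idea concrete, the sketch below shows a classical photon density estimate at a shading point, written so that the per-photon kernel weights are an input rather than a fixed formula. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function names, the 2D setup, and the uniform-kernel baseline are all hypothetical; in the paper's method, the weights would instead be predicted by the trained network from per-photon and local-context features.

```python
import numpy as np

def density_estimate(shading_point, photon_positions, photon_powers,
                     kernel_weights, radius):
    """Estimate radiant flux density at a shading point from nearby photons.

    Classical photon mapping fixes `kernel_weights` analytically (e.g. a
    uniform or Epanechnikov kernel); the learned-kernel approach replaces
    these weights with network predictions. Everything here is a
    hypothetical 2D sketch for illustration.
    """
    # Squared distance from each photon to the shading point.
    d2 = np.sum((photon_positions - shading_point) ** 2, axis=1)
    inside = d2 <= radius ** 2          # photons within the gather radius
    area = np.pi * radius ** 2          # area of the gather disc
    # Weighted sum of photon powers, normalized by the gather area.
    return np.sum(kernel_weights[inside] * photon_powers[inside]) / area

# Baseline with a uniform kernel (the estimator a learned kernel would replace).
rng = np.random.default_rng(0)
photons = rng.uniform(-1.0, 1.0, size=(1000, 2))  # hypothetical photon hits on a surface patch
powers = np.full(1000, 1.0 / 1000)                # total flux of 1, split evenly
uniform_w = np.ones(1000)                         # uniform kernel: all weights equal
radiance = density_estimate(np.zeros(2), photons, powers, uniform_w, radius=0.3)
```

Because photons here are spread uniformly over a 2x2 patch carrying unit total flux, the estimate should come out near 0.25 per unit area; a learned kernel would reweight the same photons to reduce the bias/variance of this estimate at low photon counts.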