Journal
ACM TRANSACTIONS ON GRAPHICS
Volume 41, Issue 4
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3528223.3530127
Keywords
Image Synthesis; Neural Networks; Encodings; Hashing; GPUs; Parallel Computation; Function Approximation
The paper introduces a versatile new input encoding that reduces the cost of training and evaluating neural graphics primitives by augmenting a small neural network with a multiresolution hash table of trainable features.
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920x1080.
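The abstract's core idea can be illustrated with a minimal NumPy sketch of a multiresolution hash encoding: integer grid coordinates at each resolution level are spatially hashed into a table of trainable feature vectors, the features at the surrounding grid corners are multilinearly interpolated, and the per-level results are concatenated before being fed to a small MLP. The specific parameters below (4 levels, a 2^14-entry table, 2 features per entry, growth factor 1.5) are illustrative choices, not the paper's tuned configuration, and the paper's fully-fused CUDA implementation is far more optimized.

```python
import numpy as np

# Large primes for the XOR-based spatial hash (first coordinate left unscaled).
PRIMES = (1, 2654435761, 805459861)

def spatial_hash(coords, table_size):
    """XOR integer grid coordinates scaled by large primes, modulo table size."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(coords.shape[-1]):
        h ^= coords[..., d].astype(np.uint64) * np.uint64(PRIMES[d])
    return h % np.uint64(table_size)

def encode(x, tables, base_res=16, growth=1.5):
    """Hash-encode points x (shape (N, dims), values in [0, 1)).

    For each level: scale to that level's grid resolution, look up the
    feature vectors of the 2^dims surrounding corners via the spatial hash,
    and multilinearly interpolate. Features are concatenated across levels.
    """
    feats = []
    dims = x.shape[1]
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        pos = x * res
        lo = np.floor(pos).astype(np.int64)
        frac = pos - lo
        acc = np.zeros((x.shape[0], table.shape[1]))
        for corner in range(2 ** dims):
            offset = np.array([(corner >> d) & 1 for d in range(dims)])
            # Multilinear interpolation weight for this corner.
            w = np.prod(np.where(offset, frac, 1.0 - frac), axis=1, keepdims=True)
            idx = spatial_hash(lo + offset, table.shape[0])
            acc += w * table[idx]
        feats.append(acc)
    return np.concatenate(feats, axis=1)

# Toy usage: 4 levels, each with a 2^14-entry table of 2-feature vectors.
# In the real system these tables are optimized by stochastic gradient descent.
rng = np.random.default_rng(0)
tables = [rng.normal(0, 1e-4, size=(2**14, 2)) for _ in range(4)]
x = rng.random((8, 3))
enc = encode(x, tables)  # shape (8, 4 * 2), the input to a small MLP
```

Because coarse levels have fewer grid vertices than table entries while fine levels alias many vertices to the same entry, the concatenated multiresolution features let the subsequent MLP disambiguate hash collisions, as the abstract notes.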