Journal
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
Volume 25, Issue 4, Pages 1636-1650
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TVCG.2018.2816059
Keywords
Volume rendering; generative models; deep learning; generative adversarial networks
Funding
- US National Science Foundation [IIS-1654221]
- Direct For Computer & Info Scie & Enginr [1314813] Funding Source: National Science Foundation
- Div Of Information & Intelligent Systems [1314896] Funding Source: National Science Foundation
- Div Of Information & Intelligent Systems [1314813] Funding Source: National Science Foundation
- Direct For Computer & Info Scie & Enginr [1654221] Funding Source: National Science Foundation
Abstract
We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced by direct and global illumination lighting, for a variety of volume datasets.
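The abstract describes a generator conditioned on (1) viewpoint and (2) opacity/color transfer functions. As a rough illustration of that conditioning scheme, here is a minimal toy sketch in NumPy: a fixed random projection stands in for the paper's learned GAN generator, and all shapes, names, and parameters (viewpoint angles, 16-sample transfer functions, 8x8 output) are hypothetical, not the authors' architecture.

```python
import numpy as np

def toy_generator(viewpoint, opacity_tf, color_tf, rng, img_size=8):
    """Toy conditional generator: maps (viewpoint, transfer functions)
    to a synthetic RGB image. A stand-in for a learned GAN generator."""
    # Conditioning vector: concatenate viewpoint and sampled transfer functions.
    cond = np.concatenate([viewpoint, opacity_tf, color_tf.ravel()])
    # Hypothetical fixed random projection in place of learned weights.
    w = rng.standard_normal((img_size * img_size * 3, cond.size))
    img = np.tanh(w @ cond)  # squash to [-1, 1], as GAN outputs typically are
    return img.reshape(img_size, img_size, 3)

rng = np.random.default_rng(0)
viewpoint = np.array([0.5, 1.2])       # e.g. azimuth, elevation (radians)
opacity_tf = rng.uniform(size=16)      # 1D opacity transfer function samples
color_tf = rng.uniform(size=(16, 3))   # RGB color transfer function samples
img = toy_generator(viewpoint, opacity_tf, color_tf, rng)
print(img.shape)  # (8, 8, 3)
```

In the paper's setting, the random projection would be replaced by a trained deep network, and changing only `opacity_tf`/`color_tf` while holding `viewpoint` fixed is what enables the view-invariant transfer-function analysis the abstract mentions.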