TwineNet: coupling features for synthesizing volume rendered images via convolutional encoder–decoders and multilayer perceptrons (bibtex)
by Shengzhou Luo, Jingxing Xu, John Dingliana, Mingqiang Wei, Lu Han, Lewei He and Jiahui Pan
Abstract:
Volume visualization plays a crucial role in both academia and industry, as volumetric data is extensively utilized in fields such as medicine, geosciences, and engineering. Addressing the complexities of volume rendering, neural rendering has emerged as a potential solution, facilitating the production of high-quality volume rendered images. In this paper, we propose TwineNet, a neural network architecture specifically designed for volume rendering. TwineNet combines features extracted from volume data, transfer functions, and viewpoints by utilizing twining skip connections across multiple feature layers. Building upon the TwineNet architecture, we introduce two neural networks, VolTFNet and PosTFNet, which leverage convolutional encoder–decoders and multilayer perceptrons to synthesize volume rendered images with novel transfer functions and viewpoints. Our experimental findings demonstrate the superiority of our models compared to state-of-the-art approaches in generating high-quality volume rendered images with novel transfer functions and viewpoints. This research contributes to advancing the field of volume rendering and showcases the potential of neural rendering techniques in scientific visualization.
Reference:
TwineNet: coupling features for synthesizing volume rendered images via convolutional encoder–decoders and multilayer perceptrons. Shengzhou Luo, Jingxing Xu, John Dingliana, Mingqiang Wei, Lu Han, Lewei He and Jiahui Pan. In The Visual Computer. 2024.
Bibtex Entry:
@Article{Luo2024,
author={Shengzhou Luo and Jingxing Xu and John Dingliana and Mingqiang Wei and Lu Han and Lewei He and Jiahui Pan},
title={{TwineNet}: coupling features for synthesizing volume rendered images via convolutional encoder--decoders and multilayer perceptrons},
journal={The Visual Computer},
year={2024},
month={Apr},
day={12},
abstract={Volume visualization plays a crucial role in both academia and industry, as volumetric data is extensively utilized in fields such as medicine, geosciences, and engineering. Addressing the complexities of volume rendering, neural rendering has emerged as a potential solution, facilitating the production of high-quality volume rendered images. In this paper, we propose TwineNet, a neural network architecture specifically designed for volume rendering. TwineNet combines features extracted from volume data, transfer functions, and viewpoints by utilizing twining skip connections across multiple feature layers. Building upon the TwineNet architecture, we introduce two neural networks, VolTFNet and PosTFNet, which leverage convolutional encoder--decoders and multilayer perceptrons to synthesize volume rendered images with novel transfer functions and viewpoints. Our experimental findings demonstrate the superiority of our models compared to state-of-the-art approaches in generating high-quality volume rendered images with novel transfer functions and viewpoints. This research contributes to advancing the field of volume rendering and showcases the potential of neural rendering techniques in scientific visualization.},
issn={1432-2315},
doi={10.1007/s00371-024-03368-5},
url={https://doi.org/10.1007/s00371-024-03368-5}
}