Neural Networks for Digital Materials and Radiance Encoding
Date
2023
Publisher
Universidad Rey Juan Carlos
Abstract
Realistic virtual scenes are becoming increasingly prevalent in our society, with a wide range of applications
in areas such as manufacturing, architecture, fashion design, and entertainment, including
movies, video games, and augmented and virtual reality. Generating realistic images of such scenes
requires highly accurate illumination, geometry, and material models, which are challenging to
obtain. Traditionally, such models have been created manually by skilled
artists, a process that can be prohibitively time-consuming and costly. Alternatively, real-world
examples can be captured, but this approach presents additional challenges in terms of accuracy and
scalability. Moreover, while realism and accuracy are crucial in such processes, rendering efficiency
is also a key requirement, so that lifelike images can be generated with the speed required in many
real-world applications. One of the most significant challenges in this regard is the acquisition and
representation of materials, which are a critical component of our visual world and, by extension, of
virtual representations of it. However, existing approaches for material acquisition and representation
are limited in efficiency and accuracy, which constrains their real-world impact. To address these
challenges, data-driven approaches that leverage machine learning may provide viable solutions.
Nevertheless, designing and training machine learning models that meet all these competing requirements
remains a challenging task, requiring careful consideration of trade-offs between quality and
efficiency.
In this thesis, we propose novel learning-based solutions to address several key challenges in physically-based
rendering and material digitization. Our approach leverages various forms of neural networks
to introduce innovative algorithms for radiance encoding and for digital material generation, editing, and
estimation. First, we present a visual attribute transfer framework for digital materials that can
effectively generalize to new illumination conditions and geometric distortions. We showcase a
use case of this method for high-resolution material acquisition using a custom device. Additionally,
we propose a generative model capable of synthesizing tileable textures from a single input image,
which helps improve the quality of material rendering. Building upon recent work in neural fields, we
also introduce a material representation that accurately encodes material reflectance while offering
powerful editing and propagation capabilities. In addition to reflectance, we present a novel method
for global illumination encoding that leverages carefully designed generative models to achieve
significantly faster sampling than previous work. Finally, we propose two innovative methods for
low-cost material digitization. Using flatbed scanners as the capture device, we present a generative
model that provides high-resolution material reflectance estimates from a single input image,
together with an uncertainty quantification algorithm that increases its reliability and efficiency.
Additionally, we present a novel method for digitizing fabric mechanical properties using depth
images as input, which we extend with a perceptually-validated drape similarity metric. Overall, the
contributions of this thesis represent significant advances in the fields of radiance encoding and digital
material acquisition and editing, enhancing the quality, scalability, and efficiency of physically-based
rendering pipelines.
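As an illustration of the neural-field idea the abstract refers to, the sketch below shows how a coordinate-based network can encode reflectance: a small MLP maps a surface location and light/view directions to an RGB reflectance value and is fitted to measured samples. This is a minimal, hypothetical PyTorch example of the general technique, not the architecture or training procedure proposed in the thesis; the network layout, input encoding, and data are all illustrative assumptions.

import torch
import torch.nn as nn

class ReflectanceField(nn.Module):
    """Hypothetical coordinate-based reflectance network (illustrative only)."""
    def __init__(self, hidden=128):
        super().__init__()
        # Input: 2D surface UV + 3D light direction + 3D view direction = 8 values.
        self.mlp = nn.Sequential(
            nn.Linear(8, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB reflectance
            nn.Softplus(),         # keep reflectance non-negative
        )

    def forward(self, uv, wi, wo):
        # Concatenate coordinates and directions into one input vector per sample.
        return self.mlp(torch.cat([uv, wi, wo], dim=-1))

# Fit the field to placeholder "measured" reflectance samples (random stand-ins
# here; a real capture setup would supply calibrated measurements).
model = ReflectanceField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
uv = torch.rand(1024, 2)                                    # surface coordinates
wi = nn.functional.normalize(torch.randn(1024, 3), dim=-1)  # light directions
wo = nn.functional.normalize(torch.randn(1024, 3), dim=-1)  # view directions
target = torch.rand(1024, 3)                                # placeholder data
for step in range(200):
    loss = ((model(uv, wi, wo) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the field is continuous in its inputs, it can be queried at arbitrary coordinates after fitting, which is what makes editing and propagation operations on top of such a representation attractive.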
Description
Doctoral thesis defended at Universidad Rey Juan Carlos, Madrid, in 2023. Supervisor:
Elena Garcés García
Except where otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International