DyNeRFactor: Temporally consistent intrinsic scene decomposition for dynamic NeRFs

Abstract

We present a method for estimating the intrinsic components of a dynamic scene captured with multi-view video sequences. Unlike previous work focused either on static scenes or single-view videos, our method simultaneously addresses the extra computational complexity introduced by dynamic motion while enabling novel view synthesis. Key to making the output temporally consistent is encoding the temporal information in a latent embedding that leverages the redundant information of the dynamic scene. Our intrinsic components include diffuse and specular albedo, as well as scene geometry and environment illumination. We explicitly account for light visibility, which we estimate efficiently by considering dynamic and static points separately, making the problem computationally tractable. We demonstrate the effectiveness of our approach through quantitative and qualitative experiments, showing that it outperforms the naïve per-frame decomposition approach on several real-world scenes.
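The abstract does not specify the architecture, so the following is only a minimal sketch of the general idea of conditioning an intrinsic field on a learned per-frame latent code shared across views; the class name TemporalIntrinsicField, the latent dimension, the MLP layout, and the output split into diffuse albedo, specular albedo, and density are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalIntrinsicField(nn.Module):
    """Hypothetical sketch: an MLP mapping a 3D point plus a learned
    per-frame latent code to intrinsic components. The latent table is
    shared across all camera views of a frame, which is one way to
    exploit the redundancy of a multi-view video for temporal consistency."""

    def __init__(self, num_frames: int, latent_dim: int = 32, hidden: int = 128):
        super().__init__()
        # One learnable latent code per frame, optimized jointly with the MLP.
        self.frame_codes = nn.Embedding(num_frames, latent_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            # 3 diffuse albedo + 3 specular albedo + 1 density (assumed outputs)
            nn.Linear(hidden, 7),
        )

    def forward(self, xyz: torch.Tensor, frame_idx: torch.Tensor):
        z = self.frame_codes(frame_idx)                 # (N, latent_dim)
        out = self.mlp(torch.cat([xyz, z], dim=-1))     # (N, 7)
        diffuse = torch.sigmoid(out[:, 0:3])
        specular = torch.sigmoid(out[:, 3:6])
        density = torch.relu(out[:, 6:7])
        return diffuse, specular, density

# Usage: query 1024 points at frame 5 of a 60-frame sequence.
model = TemporalIntrinsicField(num_frames=60)
pts = torch.rand(1024, 3)
idx = torch.full((1024,), 5, dtype=torch.long)
kd, ks, sigma = model(pts, idx)
```

Tying all views of a frame to a single latent code, rather than decomposing each frame independently, is one plausible way to realize the temporal consistency the abstract describes.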

Description

Citation

Mario Alfonso-Arsuaga, Jorge García-González, Andrea Castiella-Aguirrezabala, Miguel Andrés Alonso, Elena Garcés, DyNeRFactor: Temporally consistent intrinsic scene decomposition for dynamic NeRFs, Computers & Graphics, Volume 122, 2024, 103984, ISSN 0097-8493, https://doi.org/10.1016/j.cag.2024.103984.
Except where otherwise noted, this item's license is described as Attribution 4.0 International.