Abstract

We present a method for estimating the intrinsic components of a dynamic scene captured with multi-view video sequences. Unlike previous work, which focuses either on static scenes or on single-view videos, our method addresses the extra computational complexity introduced by dynamic motion while enabling novel view synthesis. Key to making the output temporally consistent is encoding the temporal information in a latent embedding that leverages the redundant information of the dynamic scene. Our intrinsic components include diffuse and specular albedo, as well as scene geometry and environment illumination. We explicitly account for light visibility, which we estimate efficiently by considering dynamic and static points separately, making the problem computationally tractable. We demonstrate the effectiveness of our approach through quantitative and qualitative experiments, showing that it outperforms a naïve per-frame decomposition approach in several real-world scenes.
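To make the decomposition concrete, the sketch below shades a single surface point from its intrinsic components (diffuse albedo, specular albedo, normal) under a set of lights, with a per-light visibility term masking occluded light. This is a toy Lambertian-plus-Blinn-Phong illustration with hypothetical names (`shade_point`, `shininess`), not the paper's actual shading model.

```python
import numpy as np

def shade_point(diffuse_albedo, specular_albedo, normal, view_dir,
                light_dirs, light_rgb, visibility, shininess=32.0):
    """Toy intrinsic shading: Lambertian diffuse + Blinn-Phong specular,
    each light scaled by a visibility term in [0, 1].
    All direction vectors are assumed unit-length."""
    rgb = np.zeros(3)
    for L, c, v in zip(light_dirs, light_rgb, visibility):
        n_dot_l = max(np.dot(normal, L), 0.0)          # diffuse cosine term
        half = (L + view_dir) / np.linalg.norm(L + view_dir)
        spec = max(np.dot(normal, half), 0.0) ** shininess
        # visibility v masks light that is occluded at this point
        rgb += v * c * (diffuse_albedo * n_dot_l + specular_albedo * spec)
    return rgb
```

In the paper's setting, visibility is the expensive term; evaluating it separately for static and dynamic points (caching it for the static part of the scene) is what keeps the problem tractable.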

Citations

3 citations in WOS
0 citations in

Journal Title

Journal ISSN

Volume Title

Publisher

Elsevier

External URL

Date

Description

Citation

Mario Alfonso-Arsuaga, Jorge García-González, Andrea Castiella-Aguirrezabala, Miguel Andrés Alonso, Elena Garcés, DyNeRFactor: Temporally consistent intrinsic scene decomposition for dynamic NeRFs, Computers & Graphics, Volume 122, 2024, 103984, ISSN 0097-8493, https://doi.org/10.1016/j.cag.2024.103984

Endorsement

Review

Supplemented By

Referenced By

Statistics

Views: 122
Downloads: 104
