PERGAMO: Personalized 3D Garments from Monocular Video
Date
2023
Journal Title
Journal ISSN
Volume Title
Publisher
Wiley
Abstract
Clothing plays a fundamental role in digital humans. Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: a high computational run-time cost, which hinders their deployment, and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learning a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behavior, and that it generalizes to unseen body motions extracted from motion capture datasets.
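The abstract describes a regressor that maps the underlying body pose to a garment deformation. The paper's actual architecture is not reproduced in this record; as a minimal sketch under assumed dimensions (a SMPL-style 72-dimensional pose vector, an illustrative vertex count, and randomly initialized weights standing in for trained parameters), such a pose-conditioned deformation model might look like:

```python
import numpy as np

# Hypothetical dimensions -- not taken from the paper.
POSE_DIM = 72        # e.g. SMPL-style axis-angle pose parameters (assumption)
NUM_VERTICES = 4424  # garment template vertex count (illustrative)
HIDDEN = 256

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for trained parameters.
W1 = rng.standard_normal((POSE_DIM, HIDDEN)) * 0.01
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, NUM_VERTICES * 3)) * 0.01
b2 = np.zeros(NUM_VERTICES * 3)

def predict_offsets(pose: np.ndarray) -> np.ndarray:
    """Map a body pose vector to per-vertex garment displacements."""
    h = np.maximum(pose @ W1 + b1, 0.0)          # ReLU hidden layer
    return (h @ W2 + b2).reshape(NUM_VERTICES, 3)

# Deformed garment = template geometry + pose-dependent offsets.
template = np.zeros((NUM_VERTICES, 3))           # placeholder template mesh
pose = rng.standard_normal(POSE_DIM)
deformed = template + predict_offsets(pose)
print(deformed.shape)  # (4424, 3)
```

In a trained system the weights would be fit on the 3D reconstructions described above, so that the predicted offsets reproduce the observed cloth deformations for each pose.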
Description
Funded by the Comunidad de Madrid in the framework of the Multiannual Agreement with the Universidad Rey Juan Carlos, Line of Action 1, "Encouragement of research for young PhD". Grant Number: CaptHuRe (M2736)
Funded by the Universidad Rey Juan Carlos through the Distinguished Researcher position INVESDIST-04. Grant Number: 17/12/2020
Funded by a Leonardo Fellowship from the Fundación BBVA
Keywords
Citation
Andrés Casado-Elvira, Marc Comino Trinidad, Dan Casas. PERGAMO: Personalized 3D Garments from Monocular Video. © 2023 The Authors. Computer Graphics Forum published by Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd. https://doi.org/10.1111/cgf.14644
Collections
Except where otherwise noted, this item's license is described as Attribution-NonCommercial 4.0 International