Abstract
We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on that effectively addresses garment-body collisions. In contrast to existing methods that require an undesirable postprocessing step to fix garment-body interpenetrations at test time, our approach directly outputs 3D garment configurations that do not collide with the underlying body. Key to our success is a new canonical space for garments that removes pose-and-shape deformations already captured by a new diffused human body model, which extrapolates body surface properties such as skinning weights and blendshapes to any 3D point. We leverage this representation to train a generative model with a novel self-supervised collision term that learns to reliably solve garment-body interpenetrations. We extensively evaluate and compare our results with recently proposed data-driven methods, and show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.
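The abstract describes a self-supervised collision term that penalizes garment vertices lying inside the body. The paper's exact formulation is not given here, so the following is only a minimal sketch of that idea under stated assumptions: each garment vertex is matched to its nearest body surface point, the signed distance along the body normal is computed, and vertices closer than a small margin `eps` (or inside the body, i.e. negative signed distance) incur a hinge penalty. The function name `collision_loss` and the brute-force nearest-neighbor search are illustrative choices, not the authors' implementation.

```python
import numpy as np

def collision_loss(garment_verts, body_points, body_normals, eps=2e-3):
    """Hypothetical sketch of a self-supervised collision penalty.

    garment_verts: (G, 3) garment vertex positions.
    body_points:   (B, 3) sampled body surface points.
    body_normals:  (B, 3) outward unit normals at those points.
    eps:           safety margin keeping the garment slightly off the skin.
    """
    # Pairwise distances between garment vertices and body samples: (G, B).
    d = np.linalg.norm(garment_verts[:, None, :] - body_points[None, :, :],
                       axis=-1)
    nearest = np.argmin(d, axis=1)

    # Signed distance along the body normal: negative means the garment
    # vertex lies inside the body surface.
    offset = garment_verts - body_points[nearest]
    signed = np.einsum('ij,ij->i', offset, body_normals[nearest])

    # Hinge penalty: zero once a vertex is at least eps outside the body.
    return np.maximum(eps - signed, 0.0).mean()
```

In a training loop, this scalar would be added to the usual reconstruction losses so the generative model learns to output garment configurations that avoid interpenetration, rather than relying on a postprocessing fix at test time.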
Publisher: GMRV Publications

Citation: Igor Santesteban, Nils Thürey, Miguel A. Otaduy, Dan Casas. "Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On". IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.