Browsing by Author "Casas, Dan"
Showing 1 - 16 of 16
Item: 4D Model Flow: Precomputed Appearance Alignment for Real-time 4D Video Interpolation (Wiley, 2015-10-15)
Hilton, Adrian; Theobalt, Christian; Collomosse, John; Richardt, Christian; Casas, Dan

We introduce the concept of 4D model flow for the precomputed alignment of dynamic surface appearance across 4D video sequences of different motions reconstructed from multi-view video. Precomputed 4D model flow allows the efficient parametrization of surface appearance from the captured videos, which enables efficient real-time rendering of interpolated 4D video sequences whilst accurately reproducing visual dynamics, even when using a coarse underlying geometry. We estimate the 4D model flow using an image-based approach that is guided by available geometry proxies. We propose a novel representation in surface texture space for efficient storage and online parametric interpolation of dynamic appearance. Our 4D model flow overcomes previous requirements for computationally expensive online optical flow computation for data-driven alignment of dynamic surface appearance by precomputing the appearance alignment. This leads to an efficient rendering technique that enables online interpolation between 4D videos in real time, from arbitrary viewpoints and with visual quality comparable to the state of the art.

Item: 4D Video Textures for Interactive Character Appearance (Wiley, 2014-05-01)
Hilton, Adrian; Collomosse, John; Volino, Marco; Casas, Dan

4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performances captured in a multiple-camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only.
4DVT is the final piece in the puzzle, enabling video-realistic interactive animation through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple-view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video-quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of the captured video is maintained. The 4DVT representation achieves a >90% reduction in size and halves the rendering cost.

Item: A Survey on Intrinsic Images: Delving Deep into Lambert and Beyond (Springer, 2022-02-02)
Garces, Elena; Rodriguez-Pardo, Carlos; Casas, Dan; Lopez-Moreno, Jorge

Intrinsic imaging, or intrinsic image decomposition, has traditionally been described as the problem of decomposing an image into two layers: a reflectance, the albedo-invariant color of the material; and a shading, produced by the interaction between light and geometry. Deep learning techniques have been broadly applied in recent years to increase the accuracy of those separations. In this survey, we overview those results in the context of well-known intrinsic image data sets and relevant metrics used in the literature, discussing their suitability to predict a desirable intrinsic image decomposition. Although the Lambertian assumption is still a foundational basis for many methods, we show that there is increasing awareness of the potential of more sophisticated, physically principled components of the image formation process, that is, optically accurate material models and geometry, and more complete inverse light transport estimations.
We classify these methods in terms of the type of decomposition, considering the priors and models used, as well as the learning architecture and methodology driving the decomposition process. We also provide insights about future directions for research, given the recent advances in neural, inverse and differentiable rendering techniques.

Item: Animation Control of Surface Motion Capture (IEEE, 2013-12-06)
Tejera, Margara; Casas, Dan; Hilton, Adrian

Surface motion capture (SurfCap) of actor performance from multiple-view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space-time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap-driven and SurfCap sequences to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface- and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluation of each approach and the integrated animation framework is presented on real SurfCap sequences of actors performing multiple motions with a variety of clothing styles.
Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance, and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.

Item: Fine Virtual Manipulation with Hands of Different Sizes (GMRV Publications, 2021)
Sorli, Suzanne; Verschoor, Mickeal; Casas, Dan; Tajadura-Jiménez, Ana; Otaduy, Miguel A.

Natural interaction with virtual objects relies on two major technology components: hand tracking and hand-object physics simulation. There are functional solutions for these two components, but their hand representations may differ in size and skeletal morphology, hence making the connection non-trivial. In this paper, we introduce a pose retargeting strategy to connect the tracked and simulated hand representations, and we formulate and solve this hand retargeting as an optimization problem. We have also carried out a user study that demonstrates the effectiveness of our approach in enabling fine manipulations that are slow and awkward with naïve approaches.

Item: How Will It Drape Like? Capturing Fabric Mechanics from Depth Images (Wiley, 2023)
Rodriguez-Pardo, Carlos; Prieto-Martin, Melania; Casas, Dan; Garces, Elena

We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera. Our approach enables the creation of mechanically correct digital representations of real-world textile materials, which is a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods, which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale, is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators.
To this end, we propose a sim-to-real strategy to train a learning-based framework that takes as input one or multiple images and outputs a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols, our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop. Key in our work is to demonstrate that evaluating regression accuracy based on similarity in parameter space leads to inaccurate distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity that operates in the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a similarity rank. We show that our metric correlates with human judgments about the perception of drape similarity, and that our model predictions produce perceptually accurate results compared to the ground-truth parameters.

Item: Interactive Animation of 4D Performance Capture (IEEE, 2012-11-30)
Casas, Dan; Tejera, Margara; Guillemaut, Jean-Yves; Hilton, Adrian

A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple-camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced, which combines the realistic deformation of previous nonlinear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real time based on surface shape and motion similarity.
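As a purely illustrative sketch of the kind of parametric control described in the performance-capture abstracts above (not the papers' actual hybrid nonlinear method), blending two temporally aligned mesh sequences under one high-level parameter can be reduced to per-vertex interpolation. All names here are hypothetical, and per-frame vertex correspondence is assumed:

```python
# Toy sketch of parametric mesh sequence blending: given two temporally
# aligned sequences (e.g. slow walk and fast walk) with per-frame vertex
# correspondence, a parameter w in [0, 1] interpolates between them.
# The published method uses a hybrid nonlinear scheme; this linear blend
# only illustrates the interface.

def blend_frame(frame_a, frame_b, w):
    """Linearly interpolate corresponding vertices of a single frame."""
    return [tuple((1.0 - w) * a + w * b for a, b in zip(va, vb))
            for va, vb in zip(frame_a, frame_b)]

def blend_sequence(seq_a, seq_b, w):
    """Blend two aligned mesh sequences frame by frame."""
    return [blend_frame(fa, fb, w) for fa, fb in zip(seq_a, seq_b)]

# Two one-frame "sequences" with two vertices each.
slow = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]
fast = [[(0.0, 2.0, 0.0), (1.0, 2.0, 0.0)]]
mid = blend_sequence(slow, fast, w=0.5)
```

In a real system the blended frame would then be rendered, and `w` mapped to a user-facing parameter such as walking speed.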
Four-dimensional parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.

Item: Learning Contact Corrections for Handle-Based Subspace Dynamics (ACM, 2021)
Casas, Dan; Pérez, Jesús; Otaduy, Miguel A.; Romero, Cristian

This paper introduces a novel subspace method for the simulation of dynamic deformations. The method augments existing linear handle-based subspace formulations with nonlinear learning-based corrections parameterized by the same subspace. Together, they produce a compact nonlinear model that combines the fast dynamics and overall contact-based interaction of subspace methods with the highly detailed deformations of learning-based methods. We propose a formulation of the model with nonlinear corrections applied in the local undeformed setting, decoupling internal and external contact-driven corrections. We define a simple mapping of these corrections to the global setting, an efficient implementation for dynamic simulation, and a training pipeline to generate examples that efficiently cover the interaction space. Altogether, the method achieves an unprecedented combination of speed and contact-driven deformation detail.

Item: Learning-Based Animation of Clothing for Virtual Try-On (2020-04-17)
Santesteban, Igor; Otaduy, Miguel A.; Casas, Dan

This paper presents a learning-based clothing animation method for highly efficient virtual try-on simulation. Given a garment, we preprocess a rich database of physically-based dressed character simulations, for multiple body shapes and animations. Then, using this database, we train a learning-based model of cloth drape and wrinkles, as a function of body shape and dynamics. We propose a model that separates global garment fit, due to body shape, from local garment wrinkles, due to both pose dynamics and body shape.
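The fit/wrinkle separation described in the clothing-animation abstract above can be sketched schematically. This is not the paper's implementation: the learned regressors are replaced by trivial linear stand-ins, and every name is hypothetical; only the additive structure (global fit plus local wrinkle displacement) is illustrated:

```python
# Schematic decomposition of a garment model into a global fit term
# (a function of body shape) and a local wrinkle term (a function of
# pose dynamics and shape). Toy stand-ins replace the learned models.

def global_fit(template, shape_params):
    """Scale the garment template to match overall body shape (toy)."""
    scale = 1.0 + 0.1 * shape_params[0]
    return [(x * scale, y * scale, z * scale) for (x, y, z) in template]

def local_wrinkles(pose_dynamics, shape_params, n_verts):
    """Per-vertex displacement standing in for regressed wrinkle detail."""
    amp = 0.01 * pose_dynamics * (1.0 + shape_params[0])
    return [(0.0, amp, 0.0)] * n_verts

def drape(template, shape_params, pose_dynamics):
    """Final garment = global fit + local wrinkle displacements."""
    fit = global_fit(template, shape_params)
    wrk = local_wrinkles(pose_dynamics, shape_params, len(template))
    return [(fx + wx, fy + wy, fz + wz)
            for (fx, fy, fz), (wx, wy, wz) in zip(fit, wrk)]

template = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
verts = drape(template, shape_params=[0.5], pose_dynamics=2.0)
```

The appeal of this structure is that the slow-varying fit term and the fast-varying wrinkle term can be modeled (and trained) separately.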
We use a recurrent neural network to regress garment wrinkles, and we achieve highly plausible nonlinear effects, in contrast to the blending artifacts suffered by previous methods. At runtime, dynamic virtual try-on animations are produced in just a few milliseconds for garments with thousands of triangles. We show qualitative and quantitative analysis of the results.

Item: Modeling and Estimation of Nonlinear Skin Mechanics for Animated Avatars (2020-04-17)
Romero, Cristian; Otaduy, Miguel A.; Casas, Dan; Perez, Jesus

Data-driven models of human avatars have shown very accurate representations of static poses with soft-tissue deformations. However, they are not yet capable of precisely representing very nonlinear deformations and highly dynamic effects. Nonlinear skin mechanics are essential for a realistic depiction of animated avatars interacting with the environment, but controlling physics-only solutions often results in a very complex parameterization task. In this work, we propose a hybrid model in which the soft-tissue deformation of animated avatars is built as a combination of a data-driven statistical model, which kinematically drives the animation, and an FEM mechanical simulation. Our key contribution is the definition of deformation mechanics in a reference pose space by inverse skinning of the statistical model. This way, we retain as much as possible of the accurate static data-driven deformation and use a custom anisotropic nonlinear material to accurately represent skin dynamics.
Model parameters, including the heterogeneous distribution of skin thickness and material properties, are automatically optimized from 4D captures of humans showing soft-tissue deformations.

Item: Parametric animation of performance-captured mesh sequences (Wiley, 2012-03-20)
Casas, Dan; Tejera, Margara; Guillemaut, Jean-Yves; Hilton, Adrian

In this paper, we introduce an approach to high-level parameterisation of captured mesh sequences of actor performance for real-time interactive animation control. High-level parametric control is achieved by non-linear blending between multiple mesh sequences exhibiting variation in a particular movement. For example, walking speed is parameterised by blending fast and slow walk sequences. A hybrid non-linear mesh sequence blending approach is introduced to approximate the natural deformation of non-linear interpolation techniques whilst maintaining the real-time performance of linear mesh blending. Quantitative results show that the hybrid approach gives an accurate real-time approximation of offline non-linear deformation. An evaluation of the approach shows good performance not only for entire meshes but also for specific mesh areas. Results are presented for single- and multi-dimensional parametric control of walking (speed/direction), jumping (height/distance) and reaching (height) from captured mesh sequences. This approach allows continuous real-time control of high-level parameters such as speed and direction whilst maintaining the natural surface dynamics of captured movement.

Item: PERGAMO: Personalized 3D Garments from Monocular Video (Wiley, 2023)
Casado-Elvira, Andrés; Comino Trinidad, Marc; Casas, Dan

Clothing plays a fundamental role in digital humans.
Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational run-time cost, which hinders their deployment; and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behavior, and generalizes to unseen body motions extracted from motion capture datasets.

Item: Real-time Pose and Shape Reconstruction of Two Interacting Hands With a Single Depth Camera (ACM Transactions on Graphics, 2019)
Mueller, Franziska; Davis, Micah; Bernard, Florian; Sotnychenko, Oleksandr; Verschoor, Mickeal; Otaduy, Miguel A.; Casas, Dan; Theobalt, Christian

We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands. Our approach is the first two-hand tracking solution that combines an extensive list of favorable properties, namely it is marker-less, uses a single consumer-level depth camera, runs in real time, handles inter- and intra-hand collisions, and automatically adjusts to the user's hand shape. In order to achieve this, we embed a recent parametric hand pose and shape model and a dense correspondence predictor based on a deep neural network into a suitable energy minimization framework. For training the correspondence prediction network, we synthesize a two-hand dataset based on physical simulations that includes both hand pose and shape annotations while at the same time avoiding inter-hand penetrations.
To achieve real-time rates, we phrase the model fitting in terms of a nonlinear least-squares problem so that the energy can be optimized based on a highly efficient GPU-based Gauss-Newton optimizer. We show state-of-the-art results in scenes that exceed the complexity level demonstrated by previous work.

Item: Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On (GMRV Publications, 2021)
Santesteban, Igor; Thuerey, Nils; Otaduy, Miguel A.; Casas, Dan

We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on that effectively addresses garment-body collisions. In contrast to existing methods that require an undesirable postprocessing step to fix garment-body interpenetrations at test time, our approach directly outputs 3D garment configurations that do not collide with the underlying body. Key to our success is a new canonical space for garments that removes pose-and-shape deformations already captured by a new diffused human body model, which extrapolates body surface properties such as skinning weights and blendshapes to any 3D point. We leverage this representation to train a generative model with a novel self-supervised collision term that learns to reliably solve garment-body interpenetrations. We extensively evaluate and compare our results with recently proposed data-driven methods, and show that our method is the first to successfully address garment-body contact for unseen body shapes and motions without compromising realism and detail.

Item: SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans (2020-04-17)
Santesteban, Igor; Garces, Elena; Otaduy, Miguel A.; Casas, Dan

We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion. Datasets to learn such a task are scarce and expensive to generate, which makes trained models prone to overfitting.
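The nonlinear least-squares formulation mentioned in the hand-tracking abstract above is exactly the setting where Gauss-Newton applies. As a minimal sketch only (a one-parameter toy problem with a hypothetical residual, nothing like the paper's GPU-based hand energy), a Gauss-Newton iteration looks like:

```python
import math

# Minimal single-parameter Gauss-Newton: fit y = exp(a * x) to data by
# minimizing the sum of squared residuals r_i = exp(a * x_i) - y_i.
# Each step solves the normal equations (J^T J) da = -J^T r, which for
# one parameter reduces to a scalar division.

def gauss_newton(xs, ys, a0, iters=20):
    a = a0
    for _ in range(iters):
        residuals = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        jacobian = [x * math.exp(a * x) for x in xs]   # dr_i / da
        jtj = sum(j * j for j in jacobian)
        jtr = sum(j * r for j, r in zip(jacobian, residuals))
        a -= jtr / jtj   # Gauss-Newton update step
    return a

# Data generated with a = 0.7; the solver should recover it.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]
a_est = gauss_newton(xs, ys, a0=0.2)
```

In a real tracking system the parameter vector holds the full pose and shape, the Jacobian is a large sparse matrix, and each normal-equations solve runs on the GPU; the iteration structure is the same.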
At the core of our method are three key contributions that enable us to model highly realistic dynamics and achieve better generalization than state-of-the-art methods, while training on the same data. First, a novel motion descriptor that disentangles the standard pose representation by removing subject-specific features; second, a neural-network-based recurrent regressor that generalizes to unseen shapes and motions; and third, a highly efficient nonlinear deformation subspace capable of representing soft-tissue deformations of arbitrary shapes. We demonstrate qualitative and quantitative improvements over existing methods and, additionally, we show the robustness of our method on a variety of motion capture databases.

Item: Tactile Rendering Based on Skin Stress Optimization (Association for Computing Machinery (ACM), 2020)
Verschoor, Mickeal; Casas, Dan; Otaduy, Miguel A.

We present a method to render virtual touch, such that the stimulus produced by a tactile device on a user's skin matches the stimulus computed in a virtual environment simulation. To achieve this, we solve the inverse mapping from skin stimulus to device configuration thanks to a novel optimization algorithm. Within this algorithm, we use a device-skin simulation model to estimate rendered stimuli, we account for trajectory-dependent effects efficiently by decoupling the computation of the friction state from the optimization of the device configuration, and we accelerate computations using a neural-network approximation of the device-skin model. Altogether, we enable real-time tactile rendering of rich interactions including smooth rolling, but also contact with edges or frictional stick-slip motion. We validate our algorithm both qualitatively through user experiments, and quantitatively on a BioTac biomimetic finger sensor.