Browsing by Author "Comino-Trinidad, Marc"
Showing 1 - 4 of 4
Item
3DGStrands: Personalized 3D Gaussian splatting for realistic hair representation and animation (Elsevier, 2025-10)
Dominguez-Elvira, Henar; Alfonso-Arsuaga, Mario; Barrueco-Garcia, Ana; Comino-Trinidad, Marc

We introduce a novel method for generating a personalized 3D Gaussian Splatting (3DGS) hair representation from an unorganized set of photographs. Our approach begins by leveraging an off-the-shelf method to estimate a strand-organized point cloud representation of the hair. This point cloud serves as the foundation for constructing a 3DGS model that accurately preserves the hair’s geometric structure while visually fitting the appearance in the photographs. Our model integrates seamlessly with the standard 3DGS rendering pipeline, enabling efficient volumetric rendering of complex hairstyles. Furthermore, we demonstrate the versatility of our approach by applying the Material Point Method (MPM) to simulate realistic hair physics directly on the 3DGS model, achieving lifelike hair animation. To the best of our knowledge, this is the first method to simulate hair dynamics within a 3DGS model. This work paves the way for future research that can leverage the flexible nature of 3DGS to fit more complex hair material models or to estimate physical properties through dynamic tracking.

Item
Accurate hand contact detection from RGB images via image-to-image translation (Elsevier, 2025-05)
Sorli, Suzanne; Comino-Trinidad, Marc; Casas, Dan

Hand tracking is a growing research field that can potentially provide a natural interface for interacting with virtual environments. However, despite impressive recent advances, 3D tracking of two interacting hands from RGB video remains an open problem. While current methods can infer the 3D pose of two interacting hands reasonably well, residual errors in depth, shape, and pose estimation prevent the accurate detection of hand-to-hand contact. To mitigate these errors, we propose an image-based, data-driven method to estimate contact in hand-to-hand interactions. Our method is built on top of 3D hand trackers that predict the articulated pose of two hands, enriching them with camera-space probability maps of contact points. To train our method, we first feed motion-capture data of interacting hands into a physics-based hand simulator and compute dense 3D contact points. We then render such contact maps from various viewpoints and create a dataset of pairs of pixel-to-surface hand images and their corresponding contact labels. Finally, we train an image-to-image network that learns to translate pixel-to-surface correspondences into contact maps. At inference time, we estimate pixel-to-surface correspondences using state-of-the-art hand tracking and then use our network to predict accurate hand-to-hand contact. We validate our method qualitatively and quantitatively on real-world data and demonstrate that our contact predictions are more accurate than those of state-of-the-art hand-tracking methods.
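The contact-detection entry above describes a two-stage inference pipeline: a hand tracker yields a pixel-to-surface correspondence image, and an image-to-image network translates it into a camera-space contact probability map. The following is a minimal PyTorch sketch of that inference step with a toy stand-in network; `ContactNet`, `predict_contact`, and the input layout are illustrative assumptions, not the paper's actual architecture or API.

```python
# Minimal sketch of the inference stage described above (illustrative only).
# The network and helper names are assumptions, not the authors' code.
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    """Toy stand-in for the image-to-image translation network that maps
    pixel-to-surface correspondence images to per-pixel contact probabilities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # 1-channel contact logit map
        )

    def forward(self, correspondence_img):
        return torch.sigmoid(self.net(correspondence_img))

def predict_contact(correspondence_img, model, threshold=0.5):
    """correspondence_img: (1, 3, H, W) pixel-to-surface encoding rendered
    from the hand tracker's pose estimate; returns a boolean contact mask."""
    with torch.no_grad():
        prob = model(correspondence_img)  # camera-space contact probabilities
    return prob > threshold

# Usage with a dummy input; a real pipeline would instead render the
# correspondence image from the tracked articulated hand meshes.
model = ContactNet().eval()
dummy = torch.rand(1, 3, 256, 256)
mask = predict_contact(dummy, model)
print(mask.float().mean().item())  # fraction of pixels flagged as contact
```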
Item
Detecting anomalies in dense 3D crowds (Elsevier, 2025-08)
Prieto-Martín, Melania; Comino-Trinidad, Marc; Casas, Dan

Estimating the behavior of dense 3D crowds is crucial for applications in security, surveillance, and planning. Detecting events in such crowds from a single video, the most common scenario, is challenging due to ambiguities, occlusions, and complex human behavior. To address this, we propose a method that overlays pixel-based labels on video data to highlight anomalies in dense 3D crowd movement. Our key contribution is a data-driven, image-based model trained on features derived from 3D virtual crowd animations of articulated characters that mimic real crowds at a micro level. By using training data based on captured dense-crowd trajectories and realistic 3D motions, we can analyze and detect anomalies in complex real-world scenarios. Additionally, while acquiring ground-truth data from diverse viewpoints is difficult in real-world settings, our virtual simulator allows rendering scenes from multiple perspectives, enabling the training of models that are robust to viewpoint variations. We demonstrate qualitatively and quantitatively that our method can detect anomalies in much denser crowds than existing methods.

Item
SMPLitex: A Generative Model and Dataset for 3D Human Texture Estimation from Single Image (British Machine Vision Association, 2023)
Casas, Dan; Comino-Trinidad, Marc

We propose SMPLitex, a method for estimating and manipulating the complete 3D appearance of humans captured from a single image. SMPLitex builds upon recently proposed generative models for 2D images and extends their use to the 3D domain through pixel-to-surface correspondences computed on the input image. To this end, we first train a generative model of complete 3D human appearance and then fit it to the input image by conditioning the generative model on the visible parts of the subject. Furthermore, we propose a new dataset of high-quality human textures built by sampling SMPLitex conditioned on subject descriptions and images. We evaluate our method quantitatively and qualitatively on three publicly available datasets, demonstrating that SMPLitex significantly outperforms existing methods for human texture estimation while allowing for a wider variety of tasks such as editing, synthesis, and manipulation.
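The SMPLitex entry relies on pixel-to-surface correspondences to lift observed image pixels into a texture atlas before the generative model completes the unobserved regions. Below is a minimal NumPy sketch of that unprojection step under assumed conventions: the names (`unproject_texture`, `uv_map`, `fg_mask`), the atlas resolution, and the premise that a DensePose-style estimator supplies per-pixel UV coordinates are all hypothetical, not the paper's implementation.

```python
# Illustrative sketch of the correspondence-based texture unprojection step
# described above, run here on dummy data. Array layouts and names are
# assumptions for illustration, not SMPLitex's code.
import numpy as np

TEX = 512  # texture atlas resolution (assumed)

def unproject_texture(image, uv_map, fg_mask):
    """Scatter visible pixel colors into a UV texture atlas.

    image:   (H, W, 3) float RGB input photograph
    uv_map:  (H, W, 2) per-pixel surface coordinates in [0, 1]
             (e.g., from a DensePose-style correspondence estimator)
    fg_mask: (H, W)    boolean mask of pixels on the subject
    Returns the partial texture and a visibility mask; the unobserved texels
    are what the generative model is then conditioned to complete.
    """
    texture = np.zeros((TEX, TEX, 3), dtype=np.float32)
    visible = np.zeros((TEX, TEX), dtype=bool)
    us, vs = uv_map[fg_mask, 0], uv_map[fg_mask, 1]
    cols = np.clip((us * (TEX - 1)).astype(int), 0, TEX - 1)
    rows = np.clip((vs * (TEX - 1)).astype(int), 0, TEX - 1)
    texture[rows, cols] = image[fg_mask]  # copy observed colors into the atlas
    visible[rows, cols] = True
    return texture, visible

# Dummy usage: random image and correspondences stand in for real estimates.
H, W = 240, 180
img = np.random.rand(H, W, 3).astype(np.float32)
uv = np.random.rand(H, W, 2).astype(np.float32)
fg = np.random.rand(H, W) > 0.5
tex, vis = unproject_texture(img, uv, fg)
print(f"{vis.mean():.1%} of texels observed; the rest falls to the generative model")
```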