Browsing by Author "Pantrigo, Juan J."

Showing 1 - 4 of 4
    Pedestrian detection with LeNet-like convolutional networks
    (Springer Nature, 2020-09) Cuesta-Infante, Alfredo; García, Francisco J.; Pantrigo, Juan J.; S. Montemayor, Antonio
    We present a detection method that detects a learned target and is valid for both static and moving cameras. As an application we detect pedestrians, but the target could be any object for which a large set of images is available. The data set is fed into a number of deep convolutional networks, and two of these models are then set in cascade to filter the cutouts of a multi-resolution window that scans the frames of a video sequence. We show that the excellent performance of deep convolutional networks is very difficult to match when dealing with real problems, and yet we obtain competitive results.
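As a rough illustration of the cascade described in this abstract, the sketch below runs a cheap LeNet-style classifier over window cutouts and passes only its survivors to a second, more selective model. The network sizes, thresholds, patch size and the random cutouts are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage CNN cascade over sliding-window cutouts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetLike(nn.Module):
    """Small LeNet-style binary classifier for 32x32 RGB cutouts."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)    # 32x32 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, 5)   # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 2)       # pedestrian / background

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)
        return self.fc2(F.relu(self.fc1(x)))

def cascade_filter(cutouts, fast_net, accurate_net, t1=0.3, t2=0.7):
    """Stage 1 cheaply rejects most background patches; stage 2 confirms the rest."""
    with torch.no_grad():
        p1 = F.softmax(fast_net(cutouts), dim=1)[:, 1]
        survivors = cutouts[p1 > t1]
        if survivors.numel() == 0:
            return survivors
        p2 = F.softmax(accurate_net(survivors), dim=1)[:, 1]
        return survivors[p2 > t2]

# Example: 100 random 32x32 cutouts from a multi-resolution scan of one frame.
cutouts = torch.rand(100, 3, 32, 32)
detections = cascade_filter(cutouts, LeNetLike().eval(), LeNetLike().eval())
print(detections.shape)
```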
    SCASA: From Synthetic to Real Computer-Aided Sperm Analysis
    (Springer, Cham, 2022-05-31) Hernández-Ferrándiz, Daniel; Pantrigo, Juan J.; Cabido, Raul
    Sperm analysis has a central role in diagnosing and treating infertility. Traditionally, the assessment of sperm health was performed by an expert viewing the sample through a microscope. To simplify this task and assist the expert, CASA (Computer-Assisted Sperm Analysis) systems were developed. These systems rely on low-level computer vision tasks such as classification, detection and tracking to analyze sperm health and motility. These tasks have been widely addressed in the literature, with some supervised approaches surpassing the human capacity to solve them. However, the accuracy of these models has not been directly translated into CASA systems, mainly because of the absence of labelled data and the difficulty of obtaining it. In this work we propose the generation of synthetic semen samples to tackle the absence of labelled data. We propose a parametric model of spermatozoa and show that models trained on synthetic data can be used on real images without a further fine-tuning stage.
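To illustrate the kind of parametric modelling described in this abstract, here is a minimal sketch that renders a synthetic spermatozoon (elliptical head plus sinusoidal flagellum) onto a blank frame and returns a bounding-box label. All parameter ranges, the rendering style and the helper name are assumptions made for illustration, not the SCASA pipeline.

```python
# Hypothetical sketch: draw one parametric spermatozoon and emit its label.
import numpy as np
from PIL import Image, ImageDraw

def synth_sperm_frame(size=256, rng=np.random.default_rng(0)):
    img = Image.new("L", (size, size), color=20)           # dark background
    draw = ImageDraw.Draw(img)

    # Sample cell parameters (illustrative ranges).
    hx, hy = rng.integers(60, size - 60, 2)                # head centre
    head_w, head_h = rng.integers(6, 10), rng.integers(9, 14)
    angle = rng.uniform(0, 2 * np.pi)                      # body orientation
    tail_len, amp, wavelength = 70, 4.0, 25.0

    # Elliptical head.
    draw.ellipse([hx - head_w, hy - head_h, hx + head_w, hy + head_h], fill=230)

    # Sinusoidal flagellum starting at the head, oriented by `angle`.
    t = np.linspace(0, tail_len, 80)
    wave = amp * np.sin(2 * np.pi * t / wavelength)
    xs = hx + t * np.cos(angle) - wave * np.sin(angle)
    ys = hy + t * np.sin(angle) + wave * np.cos(angle)
    draw.line(list(zip(xs.tolist(), ys.tolist())), fill=200, width=2)

    bbox = (hx - head_w, hy - head_h, hx + head_w, hy + head_h)  # detection label
    return np.asarray(img), bbox

frame, label = synth_sperm_frame()
print(frame.shape, label)
```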
    The ETS2 Dataset, Synthetic Data from Video Games for Monocular Depth Estimation
    (Springer Nature, 2023-06-25) María-Arribas, David; Cuesta-Infante, Alfredo; Pantrigo, Juan J.
    In this work, we present a new dataset for monocular depth estimation created by extracting images, dense depth maps, and odometer data from a realistic video game simulation, Euro Truck Simulator 2. The dataset is used to train state-of-the-art depth estimation models in both supervised and unsupervised ways, which are then evaluated against real-world sequences. Our results demonstrate that models trained exclusively with synthetic data achieve satisfactory performance in the real domain. The quantitative evaluation sheds light on possible causes of the domain gap in monocular depth estimation; specifically, we discuss the effect of coarse-grained ground-truth depth maps in contrast to fine-grained depth estimation. The dataset and the code for data extraction and experiments are released as open source.
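As a toy illustration of supervised training on such synthetic (image, dense depth map) pairs, the sketch below runs one optimisation step of a tiny encoder-decoder with an L1 depth loss. The network, the loss choice and the tensor shapes are stand-in assumptions, not the state-of-the-art models evaluated in the paper.

```python
# Hypothetical sketch of one supervised training step for monocular depth estimation.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy encoder-decoder mapping an RGB image to a per-pixel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One batch of synthetic frames with dense ground-truth depth (e.g. extracted from the game).
images = torch.rand(4, 3, 128, 256)           # RGB frames
gt_depth = torch.rand(4, 1, 128, 256) * 80.0  # dense depth maps, metres

pred = model(images)
loss = nn.functional.l1_loss(pred, gt_depth)  # simple supervised depth loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```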
    Visual classification of dumpsters with capsule networks
    (ACS, 2022) Garcia-Espinosa, Francisco J.; Concha, David; Pantrigo, Juan J.; Cuesta-Infante, Alfredo
    Garbage management is an essential task in the everyday life of a city. In many countries, dumpsters are owned and deployed by the public administration. An updated what-and-where list is at the core of the decision-making process when it comes to removing or renewing them, and it may also provide extra information to other analytics in a smart-city context. In this paper, we present a capsule network-based architecture to automate the visual classification of dumpsters. We propose different network hyperparameter settings, such as reducing the convolutional kernel size and increasing the number of convolution layers, and we try several data augmentation strategies, such as crop and flip image transformations. We reduce the number of network parameters by 85% with respect to the best previous method, thus decreasing the required training time and making the whole process suitable for low-cost and embedded software architectures. In addition, the paper provides an extensive experimental analysis, including an ablation study that illustrates the contribution of each component of the proposed method. Our proposal is compared with the state-of-the-art method, which is based on a Google Inception V3 architecture pretrained on ImageNet. Experimental results show that our proposal achieves an accuracy of 95.35%, 2.35% above the previous best method.
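Two ingredients mentioned in this abstract, the crop-and-flip augmentation and the capsule non-linearity, can be sketched compactly. The snippet below uses torchvision transforms and the standard capsule "squash" function; image and capsule dimensions are illustrative assumptions, and the parameter counter is only a generic helper, not the paper's 85% comparison.

```python
# Hypothetical sketch: crop/flip augmentation and the capsule "squash" non-linearity.
import torch
from torchvision import transforms
from PIL import Image

# Crop-and-flip augmentation for dumpster photographs.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing: keeps each vector's orientation, maps its norm into [0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def count_parameters(model):
    """Trainable-parameter count, e.g. to check a model-size reduction claim."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

x = augment(Image.new("RGB", (480, 360)))   # augmented tensor, shape (3, 224, 224)
caps = squash(torch.randn(32, 10, 8))       # 10 capsules of dimension 8 per sample
print(x.shape, caps.norm(dim=-1).max(), count_parameters(torch.nn.Conv2d(3, 16, 3)))
```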

© Universidad Rey Juan Carlos
