Browsing by Author "Vega, Julio"
Showing 1 - 14 of 14
Item: Basic human–robot interaction system running on an embedded platform (Elsevier, 2021-07-21) Vega, Julio
Robotics will be a dominant area in society throughout future generations. Its presence is currently increasing in most daily life settings, with devices and mechanisms that facilitate the accomplishment of diverse tasks, as well as in work scenarios, where machines perform more and more jobs. This growing presence of autonomous robotic systems in society is due to their great efficiency and safety compared to human capacity, thanks mainly to the enormous precision of their sensor and actuator systems. Among these, vision sensors are of the utmost importance. Humans and many animals naturally enjoy powerful perception systems, but in robotics this remains a constant line of research. In addition to having a high capacity for reasoning and decision-making, these robots incorporate important advances in their perceptual systems, allowing them to interact effectively in the working environments of this new industrial revolution. Drawing on the most basic interaction between humans, looking at the face, this paper presents an innovative system developed for an autonomous, DIY robot. The system is composed of three modules. First, the face detection component, which detects human faces in the current image. Second, the scene representation algorithm, which offers a wider field of view than that of the single camera used, mounted on a servo-pan unit. Third, the active memory component, which was designed and implemented according to two competing dynamics: life and salience. The algorithm intelligently moves the servo-pan unit with the aim of finding new faces, following existing ones, and forgetting those that no longer appear in the scene.
The system was developed and validated using a low-cost platform based on a Raspberry Pi 3 board.

Item: Control System for Indoor Safety Measures Using a Faster R-CNN Architecture (MDPI, 2023-05-24) Vega, Julio
This paper presents a control system for indoor safety measures using a Faster R-CNN (Region-based Convolutional Neural Network) architecture. The proposed system aims to ensure the safety of occupants in indoor environments by detecting and recognizing potential safety hazards in real time, such as capacity control, social distancing, or mask use. Using deep learning techniques, the system detects these situations and notifies the person in charge of the company if any of them are violated. The proposed system was tested in a real teaching environment at Rey Juan Carlos University, using a Raspberry Pi 4 as the hardware platform together with an Intel Neural Stick board and a pair of PiCamera RGB (Red Green Blue) cameras to capture images of the environment, and a Faster R-CNN architecture to detect and classify objects within the images. To evaluate the performance of the system, a dataset of indoor images was collected and annotated for object detection and classification. The system was trained using this dataset, and its performance was evaluated based on precision, recall, and F1 score. The results show that the proposed system achieved a high level of accuracy in detecting and classifying potential safety hazards in indoor environments. The proposed system includes an efficiently implemented software infrastructure to be launched on a low-cost hardware platform, which is affordable for any company regardless of size or revenue, and it has the potential to be integrated into existing safety systems in indoor environments such as hospitals, warehouses, and factories, to provide real-time monitoring and alerts for safety hazards.
Future work will focus on enhancing the system's robustness and scalability to larger indoor environments with more complex safety hazards.

Item: Diseño Software (ejercicios) - Julio Vega (2024) Vega, Julio
Collection of programming exercises covering the different topics of the course.

Item: Diseño Software (slides) - Julio Vega (2024) Vega, Julio
Slides for the course topics, used to accompany the lecture sessions in which those topics are explained.

Item: Multisensory system for non-invasive monitoring and measuring of laboratory animal welfare (Taylor & Francis, 2024-03-21) Vega, Julio; Martínez, Javier; Verdú, Cristina
This paper presents a novel, cost-effective multisensory system designed for animal monitoring in research settings. The system aims to objectively assess animal welfare and discomfort during experiments, addressing the need for affordable monitoring solutions in research laboratories. It was developed and validated in compliance with European regulations on animal experimentation and in accordance with the requirements of the Animal Research Centre at the University of Alcalá in Madrid. The system integrates Raspberry Pi 4 Model B and Arduino Uno boards with various sensors, including temperature, humidity, ammonia, and airborne particle sensors, and an RGB camera. A user-friendly web interface allows remote monitoring and management of the system. This innovation promises to improve the efficiency and feasibility of animal research, enabling more precise and ethical experimentation while advancing scientific knowledge and animal welfare.

Item: Open Vision System for Low-Cost Robotics Education (MDPI, 2019-11-06) Vega, Julio; Cañas, José M.
Vision devices are currently one of the most widely used sensory elements in robots: commercial autonomous cars and vacuum cleaners, for example, have cameras. These vision devices can provide a great amount of information about robot surroundings.
However, platforms for robotics education usually lack such devices, mainly because of the computing limitations of low-cost processors. New educational platforms using the Raspberry Pi are able to overcome this limitation while keeping costs low, but extracting information from the raw images is complex for children. This paper presents an open-source vision system that simplifies the use of cameras in robotics education. It includes functions for the visual detection of complex objects and a visual memory that computes obstacle distances beyond the small field of view of regular cameras. The system was experimentally validated using the PiCam camera mounted on a pan unit on a Raspberry Pi-based robot. The performance and accuracy of the proposed vision system were studied and then used to solve two visual educational exercises: safe visual navigation with obstacle avoidance and person-following behavior.

Item: Open-source drone programming course for distance engineering education (MDPI, 2020-12-17) Cañas, José M.; Martín, Diego; Arias, Pedro; Vega, Julio; Roldán, David; García, Lía; Fernández, Jesús
This article presents a full course for autonomous aerial robotics inside the RoboticsAcademy framework. This "drone programming" course is open-access and ready to use for any teacher/student to teach/learn drone programming with it for free. In this course, students may program diverse drones on their computers without a physical presence. Unmanned aerial vehicle (UAV) applications are essentially practical, as their intelligence resides in the software part. Therefore, the proposed course emphasizes drone programming through practical learning. It comprises a collection of exercises resembling drone applications in real life, such as following a road, visual landing, and people search and rescue, including their corresponding background theory. The course has been successfully taught for five years to students from several university engineering degrees.
Some exercises from the course have also been validated in three aerial robotics competitions, including an international one. RoboticsAcademy is also briefly presented in the paper. It is an open framework for distance robotics learning in engineering degrees. It has been designed as a practical complement to the typical online videos of massive open online courses (MOOCs). Its educational contents are built upon Robot Operating System (ROS) middleware (the de facto standard in robot programming), the powerful 3D Gazebo simulator, and the widely used Python programming language. Additionally, RoboticsAcademy is a suitable tool for gamified learning and online robotics competitions, as it includes several competitive exercises and automatic assessment tools.

Item: PiBot: An Open Low-Cost Robotic Platform with Camera for STEM Education (MDPI, 2018-12-12) Vega, Julio; Cañas, José M.
This paper presents a robotic platform, PiBot, which was developed to improve the teaching of robotics with vision to secondary students. Its computational core is the Raspberry Pi 3 controller board, and the greatest novelty of this prototype is the support developed for the powerful camera mounted on board, the PiCamera. An open software infrastructure written in the Python language was implemented so that students may use this camera as the main sensor of the robotic platform. Furthermore, higher-level commands were provided to enhance the learning outcome for beginners. In addition, a PiBot 3D printable model and its counterpart for the Gazebo simulator were also developed and fully supported.
They are publicly available so that students and schools without the physical robot, or that cannot afford to obtain one, can nevertheless practice, learn, and teach robotics using these open platforms: DIY-PiBot and/or simulated-PiBot.

Item: PyBoKids: An Innovative Python-Based Educational Framework Using Real and Simulated Arduino Robots (MDPI, 2019-08-14) Vega, Julio; Cañas, José M.
In Western countries, robotics is becoming increasingly common in primary and secondary education, both as a specific discipline and as a tool to make science, technology, engineering, and mathematics (STEM) subjects more appealing to children. The impact of robotics on society is also growing yearly, with new robotics applications in such things as autonomous cars, vacuum cleaners, and the area of logistics. In addition, the labor market is constantly demanding more professionals with robotics skills. This paper presents the PyBoKids framework for teaching robotics in secondary school, whose aim is to improve pre-university robotics education. It is based on the Python programming language and robots using an Arduino microcontroller. It includes a software infrastructure and a collection of practical exercises directed at pre-university students. The software infrastructure provides support for real and simulated robots. Moreover, we describe a pilot teaching project based on this framework, which was used by more than 2000 real students over the last two years.

Item: Reconfigurable computing for reactive robotics using open-source FPGAs (MDPI, 2021-12-22) Cañas, José M.; Fernández, Jesús; Vega, Julio; Ordóñez, Juan
Reconfigurable computing provides a paradigm for creating intelligent systems that differs from the classic software computing approach. Instead of using a processor with an instruction set, a full stack of middleware, and an application program running on top, field-programmable gate arrays (FPGAs) integrate a set of cells that can be configured in different ways.
A few vendors have dominated this market with their proprietary tools, hardware devices, and boards, resulting in fragmented ecosystems with few standards and little interoperation. However, a new and complete toolchain for FPGAs, with its associated open tools, has recently emerged from the open-source community. Robotics is an expanding application field that may definitely benefit from this revolution, as fast speed and low power consumption are usual requirements. This paper hypothesizes that basic reactive robot behaviors may be easily designed following the reconfigurable computing approach and the state-of-the-art open FPGA toolchain. They provide new abstractions, such as circuit blocks and wires, for building intelligent robots. Visual programming and block libraries make such development painless and reliable. As experimental validation, two reactive behaviors were created on a real robot involving common sensors, actuators, and in-between logic. They were also implemented using classic software programming for comparison purposes. The results are discussed and show that the development of reactive robot behaviors using reconfigurable computing and open tools is feasible, achieving a high degree of simplicity and reusability while benefiting from FPGAs' low power consumption and time-critical responsiveness.

Item: Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory (MDPI, 2013-01-21) Vega, Julio; Perdices, Eduardo; Cañas, José María
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory.
The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is also useful in localization tasks, as it provides more information about the robot's surroundings than the current instantaneous image alone. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios.

Item: ROS System Facial Emotion Detection Using Machine Learning for a Low-Cost Robot Based on Raspberry Pi (MDPI, 2022-12-26) Martínez, Javier; Vega, Julio
Facial emotion recognition (FER) is a field of research with multiple solutions in the state-of-the-art, focused on fields such as security, marketing, or robotics. Several articles can be found in the literature in which emotion detection algorithms are presented from different perspectives. More specifically, for those emotion detection systems in the literature whose computational cores are low-cost, the results presented are usually obtained in simulation or with quite limited real tests. This article presents a facial emotion detection system (detecting emotions such as anger, happiness, sadness, or surprise) that was implemented under the Robot Operating System (ROS), Noetic version, and is based on the latest machine learning (ML) techniques proposed in the state-of-the-art. To make these techniques more efficient, so that they can be executed in real time on a low-cost board, extensive experiments were conducted in a real-world environment using a low-cost general-purpose board, the Raspberry Pi 4 Model B.
The final FER system proposed in this article is capable of running in real time, operating at more than 13 fps without any external accelerator hardware, which other works (widely discussed in this article) require in order to achieve the same purpose.

Item: Sensores y Actuadores (ejercicios) - Julio Vega (2024) Vega, Julio
Collection of programming exercises and worked problems covering the different topics of the course.

Item: Sensores y Actuadores (slides) - Julio Vega (2024) Vega, Julio
Slides for the course topics, used to accompany the lecture sessions in which those topics are explained.
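As an aside on the "active memory" component described in the first item above (the embedded human–robot interaction system), its two competing dynamics, life and salience, can be sketched in a few lines of plain Python. This is a minimal, illustrative sketch only: the class and method names, the fixed five-cycle lifespan, and the salience rule (look first at the face closest to being forgotten) are assumptions for the example, not the paper's actual design or parameters.

```python
class FaceMemory:
    """Each remembered face carries a 'life' value, counted in perception
    cycles, that decays when the face is not seen and is refreshed when it
    is re-observed; faces whose life runs out are forgotten."""

    def __init__(self, lifespan=5):
        self.lifespan = lifespan   # cycles a face survives without being seen
        self.faces = {}            # face_id -> remaining life

    def observe(self, seen_ids):
        # Refresh the life of every face detected in the current image.
        for fid in seen_ids:
            self.faces[fid] = self.lifespan
        # One cycle passes: decay all lives and forget expired faces.
        for fid in list(self.faces):
            self.faces[fid] -= 1
            if self.faces[fid] <= 0:
                del self.faces[fid]

    def most_salient(self):
        # Illustrative salience rule: the face closest to being forgotten
        # is the most urgent one to re-observe with the servo-pan unit.
        return min(self.faces, key=self.faces.get) if self.faces else None
```

In a real perception loop, `observe` would be called once per frame with the identifiers returned by the face detector, and `most_salient` would decide where to point the servo-pan unit next.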