Enhancing end-to-end control in autonomous driving through kinematic-infused and visual memory imitation learning

dc.contributor.author: Paniego, Sergio
dc.contributor.author: Calvo-Palomino, Roberto
dc.contributor.author: Cañas, José María
dc.date.accessioned: 2024-07-17T09:24:23Z
dc.date.available: 2024-07-17T09:24:23Z
dc.date.issued: 2024
dc.description: This work is supported by GAIA (Gestión integral para la prevención, extinción y reforestación debido a incendios forestales), Spain, under Proyectos de I+D en líneas estratégicas en colaboración entre organismos de investigación y difusión de conocimientos TRANSMISIONES 2023, Ref. PLEC2023-010303 (2024–2026), funded by the Agencia Estatal de Investigación de España.
dc.description.abstract: This paper presents an exploration, study, and comparison of various alternatives to enhance the capabilities of an end-to-end control system for autonomous driving based on imitation learning by adding visual memory and kinematic input data to the deep learning architectures that govern the vehicle. The experimental comparison relies on fundamental error metrics (MAE, MSE) during the offline assessment, supplemented by several external complementary fine-grained metrics based on the behavior of the ego vehicle in several urban test scenarios in the CARLA reference simulator in the online evaluation. Our study focuses on a lane-following application using different urban scenario layouts and visual bird's-eye-view input. The memory addition involves architectural modifications and different sensory input types. The kinematic data integration is managed with a modified input. The experiments encompass both typical driving scenarios and extreme, never-seen conditions. Additionally, we conduct an ablation study examining various memory lengths and densities. We prove experimentally that incorporating visual memory capabilities and kinematic input data makes the driving system more robust and able to handle a wider range of challenging situations, including those not encountered during training, in terms of reduction of collisions and speed self-regulation, resulting in a 75% enhancement. All the work we present here, including model architectures, trained model weights, comparison tool, and the dataset, is open-source, facilitating replication and extension of our findings.
dc.identifier.citation: Sergio Paniego, Roberto Calvo-Palomino, José María Cañas, Enhancing end-to-end control in autonomous driving through kinematic-infused and visual memory imitation learning, Neurocomputing, Volume 600, 2024, 128161, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2024.128161
dc.identifier.doi: 10.1016/j.neucom.2024.128161
dc.identifier.uri: https://hdl.handle.net/10115/38197
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Attribution 4.0 International
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: Enhancing end-to-end control in autonomous driving through kinematic-infused and visual memory imitation learning
dc.type: info:eu-repo/semantics/article

Files

Original bundle

Name: 1-s2.0-S0925231224009329-main.pdf
Size: 1.71 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 2.67 KB
Description: Item-specific license agreed to upon submission