Deep Reinforcement and Imitation Learning for Self-driving Tasks

dc.contributor.author: Hernández-García, Sergio
dc.contributor.author: Cuesta-Infante, Alfredo
dc.date.accessioned: 2025-01-30T12:41:05Z
dc.date.available: 2025-01-30T12:41:05Z
dc.date.issued: 2021-09-13
dc.description: This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-030-85713-4_7.
dc.description.abstract: In this paper we train four different deep reinforcement and imitation learning agents on two self-driving tasks. The environment is a driving simulator in which the car is virtually equipped with a monocular RGB-D camera in the windshield, has a sensor in the speedometer, and has actuators in the brakes, accelerator and steering wheel. In the imitation learning framework, the human expert sees a photorealistic road and the speedometer, and acts with the pedals and steering wheel. To be efficient, the state is a representation in the feature space extracted from the RGB images with a variational autoencoder, which is trained before running any simulation with a loss that attempts to reconstruct three images: the same RGB input, the depth image and the segmented image.
dc.identifier.citation: Hernández-García, S., Cuesta-Infante, A. (2021). Deep Reinforcement and Imitation Learning for Self-driving Tasks. In: Alba, E., et al. Advances in Artificial Intelligence. CAEPIA 2021. Lecture Notes in Computer Science, vol 12882. Springer, Cham. https://doi.org/10.1007/978-3-030-85713-4_7
dc.identifier.doi: 10.1007/978-3-030-85713-4_7
dc.identifier.isbn: 978-3-030-85713-4
dc.identifier.uri: https://hdl.handle.net/10115/71619
dc.language.iso: en
dc.publisher: Springer
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: Automobile steering equipment
dc.subject: Deep learning
dc.subject: Driving simulator
dc.subject: Driving tasks
dc.subject: Human expert
dc.subject: Imitation learning
dc.subject: Intelligent agents
dc.subject: Learning agents
dc.subject: Learning frameworks
dc.subject: Photo-realistic
dc.subject: Reinforcement learning
dc.subject: Self drivings
dc.subject: Self-driving
dc.subject: Steering wheel
dc.subject: Wheels
dc.title: Deep Reinforcement and Imitation Learning for Self-driving Tasks
dc.type: Book chapter
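
The state representation described in the abstract (a variational autoencoder fed only the RGB image, trained with a loss that reconstructs the RGB input itself, the depth image and the segmented image) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch sketch, not the authors' code: the image resolution, latent dimension, channel widths, number of segmentation classes and loss weighting are illustrative assumptions.

# Hypothetical sketch: a VAE with a shared encoder and three decoder heads
# (RGB reconstruction, depth prediction, semantic segmentation). All sizes
# are illustrative assumptions, not values taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadVAE(nn.Module):
    def __init__(self, latent_dim=64, n_classes=13):
        super().__init__()
        # Shared convolutional encoder for a 3x64x64 RGB input.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)

        def decoder(out_channels):
            # One deconvolutional head per target modality.
            return nn.Sequential(
                nn.Linear(latent_dim, 128 * 8 * 8),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            )

        self.dec_rgb = decoder(3)          # reconstructs the RGB input
        self.dec_depth = decoder(1)        # predicts the depth image
        self.dec_seg = decoder(n_classes)  # predicts per-pixel class logits

    def forward(self, rgb):
        h = self.encoder(rgb)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec_rgb(z), self.dec_depth(z), self.dec_seg(z), mu, logvar

def vae_loss(model, rgb, depth, seg_labels, beta=1.0):
    """Three reconstruction terms plus the usual KL regularizer."""
    rec_rgb, rec_depth, seg_logits, mu, logvar = model(rgb)
    loss_rgb = F.mse_loss(torch.sigmoid(rec_rgb), rgb)
    loss_depth = F.mse_loss(torch.sigmoid(rec_depth), depth)
    loss_seg = F.cross_entropy(seg_logits, seg_labels)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return loss_rgb + loss_depth + loss_seg + beta * kl

Under this reading, once the autoencoder has been pre-trained on simulator frames, only the encoder would be kept; its latent vector (together with the speedometer reading) would serve as the compact state fed to the reinforcement and imitation learning agents.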

Files

Original bundle

Name: CAEPIA_LNAI_AM.pdf
Size: 2.74 MB
Format: Adobe Portable Document Format