Deep Reinforcement and Imitation Learning for Self-driving Tasks
dc.contributor.author | Hernández-García, Sergio | |
dc.contributor.author | Cuesta-Infante, Alfredo | |
dc.date.accessioned | 2025-01-30T12:41:05Z | |
dc.date.available | 2025-01-30T12:41:05Z | |
dc.date.issued | 2021-09-13 | |
dc.description | This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-030-85713-4_7. | |
dc.description.abstract | In this paper we train four different deep reinforcement and imitation learning agents on two self-driving tasks. The environment is a driving simulator in which the car is virtually equipped with a monocular RGB-D camera on the windshield, a sensor in the speedometer, and actuators in the brakes, accelerator and steering wheel. In the imitation learning framework, the human expert sees a photorealistic road and the speedometer, and acts through the pedals and steering wheel. For efficiency, the state is a representation in the feature space extracted from the RGB images with a variational autoencoder, which is trained before running any simulation with a loss that attempts to reconstruct three images: the RGB input itself, the depth image, and the segmented image. | |
dc.identifier.citation | Hernández-García, S., Cuesta-Infante, A. (2021). Deep Reinforcement and Imitation Learning for Self-driving Tasks. In: Alba, E., et al. Advances in Artificial Intelligence. CAEPIA 2021. Lecture Notes in Computer Science, vol 12882. Springer, Cham. https://doi.org/10.1007/978-3-030-85713-4_7 | |
dc.identifier.doi | 10.1007/978-3-030-85713-4_7 | |
dc.identifier.isbn | 978-3-030-85713-4 | |
dc.identifier.uri | https://hdl.handle.net/10115/71619 | |
dc.language.iso | en | |
dc.publisher | Springer | |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International | en |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | |
dc.subject | Automobile steering equipment | |
dc.subject | Deep learning | |
dc.subject | Driving simulator | |
dc.subject | Driving tasks | |
dc.subject | Human expert | |
dc.subject | Imitation learning | |
dc.subject | Intelligent agents | |
dc.subject | Learning agents | |
dc.subject | Learning frameworks | |
dc.subject | Photo-realistic | |
dc.subject | Reinforcement learning | |
dc.subject | Self-driving | |
dc.subject | Steering wheel | |
dc.subject | Wheels | |
dc.title | Deep Reinforcement and Imitation Learning for Self-driving Tasks | |
dc.type | Book chapter |
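The abstract above describes a variational autoencoder trained, before any simulation runs, with a loss that reconstructs three targets from the RGB input: the RGB image itself, the depth image, and the segmented image. A minimal sketch of such a combined objective is shown below, assuming mean-squared-error reconstruction terms for all three heads and a standard diagonal-Gaussian KL term; the authors' actual architecture and loss weighting may differ (segmentation, for instance, is often trained with cross-entropy instead).

```python
import numpy as np

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder,
    # the usual regularizer in a variational autoencoder.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def triple_reconstruction_vae_loss(rgb, depth, seg,
                                   rgb_hat, depth_hat, seg_hat,
                                   mu, logvar, beta=1.0):
    # One reconstruction term per decoder head, as the abstract describes:
    # the RGB input itself, the depth image, and the segmented image.
    # MSE for all three heads and the beta weight are assumptions here.
    rec_rgb = np.mean((rgb - rgb_hat) ** 2)
    rec_depth = np.mean((depth - depth_hat) ** 2)
    rec_seg = np.mean((seg - seg_hat) ** 2)
    return rec_rgb + rec_depth + rec_seg + beta * kl_divergence(mu, logvar)
```

With a perfect reconstruction of all three targets and an encoder posterior equal to the prior (mu = 0, logvar = 0), the loss is zero; any reconstruction error or posterior deviation raises it.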
Files
- Name: CAEPIA_LNAI_AM.pdf
- Size: 2.74 MB
- Format: Adobe Portable Document Format