Hernández-García, Sergio; Cuesta-Infante, Alfredo

Dates: 2025-01-30; 2021-09-13

Citation: Hernández-García, S., Cuesta-Infante, A. (2021). Deep Reinforcement and Imitation Learning for Self-driving Tasks. In: Alba, E., et al. Advances in Artificial Intelligence. CAEPIA 2021. Lecture Notes in Computer Science, vol 12882. Springer, Cham. https://doi.org/10.1007/978-3-030-85713-4_7

ISBN: 978-3-030-85713-4
DOI: 10.1007/978-3-030-85713-4_7
Handle: https://hdl.handle.net/10115/71619
Type: Book chapter
Access: info:eu-repo/semantics/openAccess
Language: en
License: Attribution-NonCommercial-ShareAlike 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/)

Note: This version of the article has been accepted for publication, after peer review (when applicable), and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-030-85713-4_7.

Title: Deep Reinforcement and Imitation Learning for Self-driving Tasks

Abstract: In this paper we train four different deep reinforcement and imitation learning agents on two self-driving tasks. The environment is a driving simulator in which the car is virtually equipped with a monocular RGB-D camera in the windshield, a sensor in the speedometer, and actuators in the brakes, accelerator and steering wheel. In the imitation learning framework, the human expert sees a photorealistic road and the speedometer, and acts through the pedals and steering wheel. For efficiency, the state is a representation in the feature space extracted from the RGB images with a variational autoencoder, which is trained before running any simulation with a loss that attempts to reconstruct three images: the RGB input itself, the depth image and the segmented image.

Keywords: Automobile steering equipment; Deep learning; Driving simulator; Driving tasks; Human expert; Imitation learning; Intelligent agents; Learning agents; Learning frameworks; Photorealistic; Reinforcement learning; Self-driving; Steering wheel; Wheels
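The abstract describes a variational autoencoder whose single latent code must reconstruct three targets (the RGB input, the depth map and the segmentation map). A minimal sketch of such a three-headed training objective, in NumPy, is given below; the function names, the equal weighting of the three reconstruction terms, and the `beta` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kl_divergence(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian latent."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reconstruction_mse(pred, target):
    """Mean squared error between one decoder head's output and its target."""
    return np.mean((pred - target) ** 2)

def three_headed_vae_loss(mu, log_var, recons, targets, beta=1.0):
    """Sum the RGB, depth and segmentation reconstruction terms plus a
    beta-weighted KL term. `recons`/`targets` are dicts keyed by head name.
    Equal weights per head are an assumption for this sketch."""
    rec = sum(reconstruction_mse(recons[k], targets[k])
              for k in ("rgb", "depth", "seg"))
    return rec + beta * kl_divergence(mu, log_var)

# Toy usage: random 64x64 "images" stand in for the simulator frames.
rng = np.random.default_rng(0)
mu, log_var = rng.normal(size=32), np.zeros(32)
targets = {k: rng.random((64, 64, 3)) for k in ("rgb", "depth", "seg")}
recons = {k: np.zeros_like(v) for k, v in targets.items()}
loss = three_headed_vae_loss(mu, log_var, recons, targets)
```

Training the autoencoder offline, before any simulation, means the downstream reinforcement or imitation learning agent only ever sees the compact latent code rather than raw pixels.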