Mobile Robot Path Planning Using a QAPF Learning Algorithm for Known and Unknown Environments
Date
2022-08-08
Journal title
Journal ISSN
Volume title
Publisher
IEEE
External link
Abstract
This paper presents the computation of feasible paths for mobile robots in known and unknown environments using a QAPF learning algorithm. Q-learning is a reinforcement learning algorithm that has grown in popularity for mobile robot path planning in recent years, owing to its self-learning capability without requiring an a priori model of the environment. Notwithstanding this advantage, however, Q-learning converges slowly to the optimal solution. To address this limitation, the concept of partially guided Q-learning is employed, wherein the artificial potential field (APF) method is used to improve the classical Q-learning approach. The proposed QAPF learning algorithm for path planning can therefore enhance learning speed and improve final performance by combining Q-learning with the APF method. Planning effectiveness is measured by three criteria: path length, path smoothness, and learning time. Experiments demonstrate that the QAPF algorithm achieves better learning values than classical Q-learning in all the test environments presented, in terms of the criteria above, in both offline and online path planning modes. The QAPF learning algorithm reached an improvement of 18.83% in path length for the online mode, an improvement of 169.75% in path smoothness for the offline mode, and an improvement of 74.84% in training time over the classical approach.
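To illustrate the idea of partially guided Q-learning described in the abstract, the sketch below combines a tabular Q-learning update with an APF-style score that biases greedy action selection toward the goal and away from obstacles. This is not the paper's implementation: the grid size, obstacle layout, reward scheme, potential function, and all hyperparameters are assumptions chosen only for demonstration.

```python
# Minimal sketch of APF-guided Q-learning on a small grid world.
# All environment details and hyperparameters are illustrative assumptions,
# not the QAPF algorithm as published.
import random

random.seed(0)

SIZE = 8
START, GOAL = (0, 0), (7, 7)
OBSTACLES = {(3, 3), (3, 4), (4, 3)}
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up


def apf(cell):
    """Attractive potential toward the goal plus repulsion near obstacles."""
    att = -(abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1]))
    rep = sum(-5.0 for ob in OBSTACLES
              if abs(cell[0] - ob[0]) + abs(cell[1] - ob[1]) <= 1)
    return att + rep


def step(state, action):
    """One environment transition: (next_state, reward, done)."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    if not (0 <= nx < SIZE and 0 <= ny < SIZE) or (nx, ny) in OBSTACLES:
        return state, -1.0, False          # blocked move: stay, small penalty
    if (nx, ny) == GOAL:
        return (nx, ny), 10.0, True
    return (nx, ny), -0.1, False           # step cost encourages short paths


def train(episodes=300, alpha=0.5, gamma=0.95, epsilon=0.2, guide=1.0):
    """Tabular Q-learning; greedy choices are biased by the APF score."""
    Q = {}
    for _ in range(episodes):
        state, done = START, False
        for _ in range(200):
            if done:
                break
            if random.random() < epsilon:
                a = random.randrange(len(ACTIONS))
            else:
                # Partial guidance: rank actions by Q plus the APF value
                # of the successor cell, so early episodes already head
                # toward the goal instead of exploring blindly.
                def score(i):
                    nxt, _, _ = step(state, ACTIONS[i])
                    return Q.get((state, i), 0.0) + guide * apf(nxt)
                a = max(range(len(ACTIONS)), key=score)
            nxt, r, done = step(state, ACTIONS[a])
            best_next = max(Q.get((nxt, j), 0.0)
                            for j in range(len(ACTIONS)))
            q = Q.get((state, a), 0.0)
            Q[(state, a)] = q + alpha * (r + gamma * best_next - q)
            state = nxt
    return Q
```

The guiding term only shapes action selection; the Q-update itself is unchanged, so the learned values remain ordinary Q-learning estimates while the APF prior cuts down the wasted exploration that makes classical Q-learning slow to converge.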
Description
Keywords
Citation
U. Orozco-Rosas, K. Picos, J. J. Pantrigo, A. S. Montemayor and A. Cuesta-Infante, "Mobile Robot Path Planning Using a QAPF Learning Algorithm for Known and Unknown Environments," in IEEE Access, vol. 10, pp. 84648-84663, 2022
Collections

Except where otherwise noted, this item's license is described as Attribution 4.0 International