Evaluating the use of large language models in programming courses: a comparative study
Date
2025-07
Publisher
IATED
Abstract
Artificial intelligence (AI), particularly large language models like ChatGPT and Copilot, is reshaping programming education by expanding the sources students use for assistance. As these tools become more common, educators face the task of integrating them in ways that enhance learning outcomes.
This pilot study explored the impact of AI tools compared to traditional resources in an undergraduate programming course. Students were divided into two groups to complete coding exercises during a single session. Learning outcomes were assessed through pre- and post-tests, and self-report measures captured students’ self-perceived competence and resource preferences.
Results showed that the AI-assisted group achieved higher learning gains and completed tasks 16% faster on average. These students also reported greater satisfaction and perceived usefulness, preferring AI tools over other support resources, although course materials remained the most valued. The findings underscore AI's potential to enhance programming education when used to support essential problem-solving skills.
Citation
J. Beltrán, E. Veiga-Zarza (2025) Evaluating the use of large language models in programming courses: a comparative study, EDULEARN25 Proceedings, pp. 1761-1768.