Towards Explainable Artificial Intelligence in Machine Learning: A study on efficient Perturbation-Based Explanations
Date
2025-09-01
Journal Title
Engineering Applications of Artificial Intelligence
Journal ISSN
0952-1976
Volume Title
155
Publisher
Elsevier
External Link
https://doi.org/10.1016/j.engappai.2025.110664
Abstract
Explainable Artificial Intelligence (XAI) is critical for validating and trusting the decisions made by Machine Learning (ML) models, especially in high-stakes domains such as healthcare and finance. However, existing XAI methods often face significant computational challenges. To address this gap, this paper introduces a novel Perturbation-Based Explanation (PeBEx) method and comprehensively evaluates it against Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) across multiple datasets and ML models, assessing both explanation quality and computational efficiency. PeBEx leverages perturbation-based strategies that systematically alter input features and observe the resulting changes in model predictions to determine feature importance. This strategy offers superior computational efficiency, enabling scalability to complex models and large datasets. Through testing on both synthetic and public datasets using eight ML models, we uncover the relative strengths and limitations of each XAI method in terms of explanation accuracy, fidelity, and computational demands. Our results show that while SHAP and LIME provide detailed explanations, they often incur high computational costs, particularly with complex models such as the Multi-Layer Perceptron (MLP). Conversely, PeBEx demonstrates superior efficiency and scalability, making it particularly suitable for applications that require rapid response times without compromising explanation quality. We conclude by proposing potential enhancements for PeBEx, including its adoption in a wider array of large-scale models. This study not only advances our understanding of the computational aspects of XAI but also proposes PeBEx as a viable solution for improving the efficiency, scalability, and applicability of explainability in ML.
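The abstract describes PeBEx as systematically perturbing input features and observing the change in model predictions to score feature importance. The sketch below illustrates that general idea only; it is not the authors' PeBEx algorithm. The function name, the mean-replacement perturbation strategy, the normalization step, and the scikit-learn usage are all illustrative assumptions.

# Minimal sketch of a generic perturbation-based feature-importance
# estimate, in the spirit of the method described in the abstract.
# NOT the authors' PeBEx implementation; the mean-replacement
# perturbation and normalization are illustrative assumptions.
import numpy as np

def perturbation_importance(model, X, baseline="mean"):
    """Score each feature by the mean absolute change in the model's
    predictions when that feature is replaced by a baseline value."""
    X = np.asarray(X, dtype=float)
    base_pred = model.predict(X)
    fill = X.mean(axis=0) if baseline == "mean" else np.zeros(X.shape[1])
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] = fill[j]  # perturb one feature at a time
        scores[j] = np.mean(np.abs(model.predict(X_pert) - base_pred))
    return scores / (scores.sum() + 1e-12)  # relative importances

# Hypothetical usage with a scikit-learn regressor:
# from sklearn.ensemble import RandomForestRegressor
# rf = RandomForestRegressor().fit(X_train, y_train)
# importances = perturbation_importance(rf, X_test)

Because each feature requires only one extra batch of predictions, a scheme like this scales linearly in the number of features, which is consistent with the efficiency argument the abstract makes for perturbation-based explanations.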
Description
Partially funded by the Autonomous Community of Madrid (ELLIS Madrid Node). Also partially supported by project PID2022-140786NB-C32/AEI/10.13039/501100011033 (LATENTIA) from the Spanish Ministry of Science and Innovation. This work was supported by the CyberFold project, funded by the European Union through the NextGenerationEU instrument (Recovery, Transformation, and Resilience Plan) and managed by Instituto Nacional de Ciberseguridad de España (INCIBE), under reference number ETD202300129.
Citation
Ismael Gómez-Talal, Mana Azizsoltani, Luis Bote-Curiel, José Luis Rojo-Álvarez, Ashok Singh, Towards Explainable Artificial Intelligence in Machine Learning: A study on efficient Perturbation-Based Explanations, Engineering Applications of Artificial Intelligence, Volume 155, 2025, 110664, ISSN 0952-1976, https://doi.org/10.1016/j.engappai.2025.110664.
Except where otherwise noted, the item's license is described as Attribution 4.0 International.