Abstract

Explainable Artificial Intelligence (XAI) is critical for validating and trusting the decisions made by Machine Learning (ML) models, especially in high-stakes domains such as healthcare and finance. However, existing XAI methods often face significant computational challenges. To address this gap, this paper introduces a novel Perturbation-Based Explanation (PeBEx) method and comprehensively evaluates it against Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) across multiple datasets and ML models to assess explanation quality and computational efficiency. PeBEx leverages perturbation-based strategies that systematically alter input features and observe the resulting changes in model predictions to determine feature importance. This method offers superior computational efficiency, enabling scalability to complex models and large datasets. Through testing on both synthetic and public datasets using eight ML models, we uncover the relative strengths and limitations of each XAI method in terms of explanation accuracy, fidelity, and computational demands. Our results show that while SHAP and LIME provide detailed explanations, they often suffer from high computational costs, particularly with complex models such as the Multi-Layer Perceptron (MLP). Conversely, PeBEx demonstrates superior efficiency and scalability, making it particularly suitable for applications that require rapid response times without compromising explanation quality. We conclude by proposing potential enhancements for PeBEx, including its adoption in a wider array of large-scale models. This study not only advances our understanding of the computational aspects of XAI but also proposes PeBEx as a viable solution for improving the efficiency, scalability, and applicability of explainability in ML.
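The abstract describes the general idea behind perturbation-based explanations: perturb each input feature and measure how much the model's predictions change. The following is a minimal illustrative sketch of that idea, not the actual PeBEx algorithm (whose details are not given here); the perturbation strategy used below (random shuffling of one feature across rows) and the function names are assumptions for demonstration.

```python
# Minimal sketch of perturbation-based feature importance
# (illustrative only; NOT the published PeBEx algorithm).
# For each feature, perturb its values (here: random shuffling across rows)
# and record the mean absolute change in the model's predictions.
import numpy as np

def perturbation_importance(predict, X, rng=None):
    """Return one importance score per column of X.

    predict: callable mapping an (n, d) array to an (n,) prediction array.
    X: (n, d) input matrix.
    """
    rng = rng or np.random.default_rng(0)
    baseline = predict(X)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # perturb feature j only
        scores[j] = np.mean(np.abs(predict(Xp) - baseline))
    return scores

# Toy model: predictions depend only on feature 0, so perturbing
# feature 1 should yield an importance score of (near) zero.
X = np.random.default_rng(1).normal(size=(500, 2))
model = lambda A: 3.0 * A[:, 0]
imp = perturbation_importance(model, X)
```

One pass over the features requires only d extra model evaluations on the dataset, which is the source of the efficiency argument the abstract makes relative to sampling-heavy methods such as SHAP and LIME.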

Citations

3 citations in WOS
0 citations in

Publisher

Elsevier

Description

Partially funded by the Autonomous Community of Madrid (ELLIS Madrid Node). Also partially supported by project PID2022-140786NB-C32/AEI/10.13039/501100011033 (LATENTIA) from the Spanish Ministry of Science and Innovation, Spain. This work was supported by the CyberFold project, funded by the European Union through the NextGenerationEU instrument (Recovery, Transformation, and Resilience Plan), and managed by Instituto Nacional de Ciberseguridad de España (INCIBE), under reference number ETD202300129.

Citation

Ismael Gómez-Talal, Mana Azizsoltani, Luis Bote-Curiel, José Luis Rojo-Álvarez, Ashok Singh, Towards Explainable Artificial Intelligence in Machine Learning: A study on efficient Perturbation-Based Explanations, Engineering Applications of Artificial Intelligence, Volume 155, 2025, 110664, ISSN 0952-1976, https://doi.org/10.1016/j.engappai.2025.110664

Statistics

Views: 9
Downloads: 48
