Browsing by Author "Azizsoltani, Mana"
Showing 1 - 2 of 2
Item Machine Learning in Hospitality: Interpretable Forecasting of Booking Cancellations (Institute of Electrical and Electronics Engineers, 2025-01-29) Gómez-Talal, Ismael; Azizsoltani, Mana; Talón-Ballestero, Pilar; Singh, Ashok
The phenomenon of cancellations in hotel bookings is one of the main pain points in the hospitality sector, as it skews demand signals and can result in revenue losses estimated at about 20%. Yet forecasting booking cancellations remains an under-researched area, particularly regarding the behavioral drivers of cancellations. This paper addresses this gap by proposing a new approach to predicting hotel booking cancellations rooted in stacked generalization and Explainable Artificial Intelligence (XAI). Specifically, combining linear, tree-based, non-linear, and deep learning models into a single meta-model increased the accuracy rate to 96%. In addition, this work focuses on interpretability, identifying location, room type, and customer segment as the driving behavioral factors of cancellation. This approach can provide hoteliers with both highly accurate predictions and marketing intelligence that allows them to shape strategy to minimize losses resulting from cancellations. The results of the research provide an effective solution to the challenges involved in forecasting booking cancellations, balancing prediction accuracy with the ability to provide actionable insights.

Item Towards Explainable Artificial Intelligence in Machine Learning: A study on efficient Perturbation-Based Explanations (Elsevier, 2025-09-01) Gómez-Talal, Ismael; Azizsoltani, Mana; Bote-Curiel, Luis; Rojo-Álvarez, José Luis; Singh, Ashok
Explainable Artificial Intelligence (XAI) is critical for validating and trusting the decisions made by Machine Learning (ML) models, especially in high-stakes domains such as healthcare and finance. However, existing XAI methods often face significant computational challenges. To address this gap, this paper introduces a novel Perturbation-Based Explanation (PeBEx) method and comprehensively evaluates it against Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) across multiple datasets and ML models to assess explanation quality and computational efficiency. PeBEx leverages perturbation-based strategies to systematically alter input features and observe changes in model predictions in order to determine feature importance. This method offers superior computational efficiency, enabling scalability to complex models and large datasets. Through testing on both synthetic and public datasets using eight ML models, we uncover the relative strengths and limitations of each XAI method in terms of explanation accuracy, fidelity, and computational demands. Our results show that while SHAP and LIME provide detailed explanations, they often suffer from high computational costs, particularly with complex models such as the Multi-Layer Perceptron (MLP). Conversely, PeBEx demonstrates superior efficiency and scalability, making it particularly suitable for applications that require rapid response times without compromising explanation quality. We conclude by proposing potential enhancements for PeBEx, including its adoption in a wider array of large-scale models. This study not only advances our understanding of the computational aspects of XAI but also proposes PeBEx as a viable solution for improving the efficiency, scalability, and applicability of explainability in ML.
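The first abstract's core idea, stacked generalization, can be illustrated with a minimal sketch: heterogeneous base learners (linear, tree-based, non-linear, and neural) feed their out-of-fold predictions to a meta-model. The specific base learners, meta-learner, features, and synthetic data below are illustrative assumptions only; the paper's actual model composition is not described in this listing.

```python
# Minimal stacked-generalization sketch (illustrative, not the paper's exact setup).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a bookings dataset; a real feature matrix would hold
# attributes such as lead time, room type, location, and customer segment.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneous base learners: linear, tree-based, non-linear, and neural.
base_learners = [
    ("linear", LogisticRegression(max_iter=1000)),
    ("tree", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
]

# The meta-model combines the base learners' cross-validated predictions.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("Held-out accuracy:", stack.score(X_test, y_test))
```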
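The second abstract describes perturbation-based explanation only in general terms: systematically alter input features and observe how predictions change. The sketch below shows that generic idea; the `perturbation_importance` helper is hypothetical and is not the PeBEx algorithm, whose details are not given in this listing.

```python
# Generic perturbation-based feature-importance sketch (not the PeBEx method itself).
import numpy as np

def perturbation_importance(model, X, n_repeats=5, random_state=0):
    """Mean absolute change in predicted probability when one feature is shuffled."""
    rng = np.random.default_rng(random_state)
    baseline = model.predict_proba(X)[:, 1]
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_pert = X.copy()
            X_pert[:, j] = rng.permutation(X_pert[:, j])  # perturb a single feature
            deltas.append(np.abs(model.predict_proba(X_pert)[:, 1] - baseline).mean())
        importances[j] = np.mean(deltas)  # larger change = more influential feature
    return importances

# Example usage with the stacked model from the previous sketch:
# scores = perturbation_importance(stack, X_test)
# ranked = np.argsort(scores)[::-1]  # most influential features first
```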