Abstract
Explainable Machine Learning (ML) is an emerging field of Artificial Intelligence that has gained popularity in the last decade. It focuses on explaining ML models and their predictions, enabling people to understand the rationale behind them. Counterfactuals and semifactuals are two Explainable ML techniques that explain model predictions using other observations. Both are based on a comparison between the observation to be explained and another observation: in counterfactuals, the prediction of the comparison observation differs from that of the explained observation, while in semifactuals it is the same. Both techniques have been studied in the Social Sciences and Explainable ML communities, and they have different use cases and properties. In this paper, the Explanation Set framework, an approach that unifies counterfactuals and semifactuals, is introduced. Explanation Sets are example-based explanations defined in a neighborhood where most observations satisfy a grouping measure. The neighborhood allows restrictions to be defined and combined. The grouping measure determines whether the explanations are counterfactuals (dissimilarity) or semifactuals (similarity). Besides providing a unified framework, the major strength of the proposal is that it extends these explanations to other tasks, such as regression, by using an appropriate grouping measure. The proposal is validated on a regression task and a classification task using several neighborhoods and grouping measures.
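The abstract's core idea can be illustrated with a minimal sketch (not taken from the paper; the function names and the toy model below are hypothetical): explanations are candidate observations near the explained observation, filtered by a grouping measure on the model's predictions. A dissimilarity measure yields counterfactuals, a similarity measure yields semifactuals.

```python
import numpy as np

def explain(x, candidates, predict, grouping, k=1):
    """Return the k candidates closest to x whose predictions satisfy
    the grouping measure with respect to the prediction for x.
    (Illustrative sketch; the paper's actual formulation may differ.)"""
    fx = predict(x)
    # Keep candidates grouped with x: a similarity measure selects
    # semifactuals, a dissimilarity measure selects counterfactuals.
    kept = [z for z in candidates if grouping(fx, predict(z))]
    # Rank survivors by Euclidean distance to the explained observation.
    kept.sort(key=lambda z: np.linalg.norm(np.asarray(x) - np.asarray(z)))
    return kept[:k]

# Toy classifier: predicts the sign of the first feature.
predict = lambda p: int(p[0] >= 0)
x = (0.2, 0.0)
cands = [(-0.1, 0.0), (0.9, 0.0), (-2.0, 1.0), (0.3, 0.1)]

counterfactual = explain(x, cands, predict, lambda a, b: a != b)  # dissimilarity
semifactual    = explain(x, cands, predict, lambda a, b: a == b)  # similarity
```

For a regression task, the same sketch applies with a thresholded grouping measure, e.g. `lambda a, b: abs(a - b) > tau` for counterfactuals, which is how the framework extends beyond classification.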
Publisher
Elsevier
Citation
Rubén R. Fernández, Isaac Martín de Diego, Javier M. Moguerza, Francisco Herrera, Explanation sets: A general framework for machine learning explainability, Information Sciences, Volume 617, 2022, Pages 464-481, ISSN 0020-0255, https://doi.org/10.1016/j.ins.2022.10.084.