Abstract
In the era of data-driven medicine, the development of trustworthy Artificial Intelligence (AI)
systems that can effectively handle heterogeneous and irregular clinical data has become essential.
Among the most pressing applications is the early prediction of Multidrug Resistance (MDR) in Intensive Care Units (ICUs), where accurate and interpretable decision-support tools can directly influence patient outcomes. This doctoral dissertation proposes a suite of machine learning and graph-based architectures tailored to analyze real-world, multivariate Electronic Health Records (EHRs), with the primary objective of enabling early detection of MDR in the ICU while preserving
clinical transparency. By integrating methodologies from Time Series (TS) modeling, graph signal
processing, and Explainable Artificial Intelligence, the thesis offers a comprehensive framework
that jointly optimizes predictive performance and explainability—two pillars of responsible AI in
healthcare.
The dissertation is structured around four core objectives. First, a clinically informed preprocessing pipeline is designed to handle a large-scale, 16-year EHR dataset from the University Hospital of Fuenlabrada. This step ensures patient synchronization, temporal consistency, and meaningful label generation based on microbiological cultures, thus establishing a robust foundation for temporal modeling.
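To make this step concrete, the sketch below gives a minimal Python illustration of the general idea: irregular EHR events are aligned onto a regular per-patient daily grid, and a binary MDR label is derived per patient from culture results. The column names (patient_id, timestamp, variable, value, is_mdr) and the simple rules shown are hypothetical and do not reflect the actual schema or labeling criteria used in the thesis.

    import pandas as pd

    def build_patient_grid(events: pd.DataFrame) -> pd.DataFrame:
        """Align irregular EHR events onto a regular daily grid, one row per patient-day."""
        events = events.copy()
        events["timestamp"] = pd.to_datetime(events["timestamp"])
        # Synchronize patients on a common clock: day 0 is each patient's first recorded event.
        admission = events.groupby("patient_id")["timestamp"].transform("min")
        events["day"] = (events["timestamp"] - admission).dt.days
        grid = events.pivot_table(index=["patient_id", "day"],
                                  columns="variable", values="value", aggfunc="mean")
        # Preserve temporal consistency by carrying the last observation forward per patient.
        return grid.groupby(level="patient_id").ffill()

    def label_mdr(cultures: pd.DataFrame) -> pd.Series:
        """One binary label per patient: 1 if any microbiological culture was flagged as MDR."""
        return cultures.groupby("patient_id")["is_mdr"].max().astype(int)
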
Second, a patient-to-patient similarity framework is introduced, leveraging dynamic time warping, the time cluster kernel, and feature extraction to embed Multivariate Time Series (MTS) into interpretable similarity spaces. These embeddings support graph-based clustering and classification using classical machine learning models such as logistic regression, random forests, and support vector machines, achieving a competitive Receiver Operating Characteristic Area Under the Curve (ROC AUC) of up to 0.810. Unlike typical deep learning approaches, this lightweight method preserves explainability while delivering strong performance, enabling the identification of clinical patterns linked to MDR. Notably, non-MDR cases were associated with the presence of CF3, while MDR-positive patients showed increased exposure to mechanical ventilation and co-patient antibiotic usage over time, particularly from the PEN and AMG families. These insights demonstrate the framework's ability to extract actionable information directly from raw MTS, supporting explainable and anticipatory decision-making in ICU settings. However, the method did not capture spatial dependencies or provide temporal explainability, both of which are addressed in subsequent contributions.
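As an illustration of this similarity-space strategy (a toy sketch, not the thesis implementation), the following Python snippet computes pairwise dynamic time warping distances between patients' MTS, converts them into a precomputed kernel, and fits a support vector machine on that kernel; the data, the gamma value, and the kernel choice are arbitrary examples.

    import numpy as np
    from sklearn.svm import SVC

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Dynamic time warping distance between two MTS of shape (time, features)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return float(cost[n, m])

    def dtw_kernel(series: list, gamma: float = 0.1) -> np.ndarray:
        """Pairwise patient similarity matrix K = exp(-gamma * DTW^2)."""
        n = len(series)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = dtw_distance(series[i], series[j])
        return np.exp(-gamma * dist ** 2)

    # Toy usage: 20 patients, 14 daily time steps, 5 features each.
    rng = np.random.default_rng(0)
    patients = [rng.normal(size=(14, 5)) for _ in range(20)]
    labels = rng.integers(0, 2, size=20)
    K = dtw_kernel(patients)
    clf = SVC(kernel="precomputed").fit(K, labels)      # classical model on the similarity space
    print(clf.predict(K[:3]))                           # predictions for the first three patients
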
Third, a novel explainable deep learning architecture, the Explainable Spatio-Temporal Graph Convolutional Neural Network, is proposed to model spatio-temporal dependencies in MTS. This architecture integrates graph convolutional layers over temporally estimated adjacency matrices constructed using correlation, smoothness constraints, and a novel Heterogeneous Gower Distance. Two graph structures are evaluated: Cartesian product graphs and spatio-temporal graphs. The model achieves robust predictive performance (ROC AUC 0.810 ± 0.024), while attention mechanisms enable intrinsic explainability by assigning variable-level importance scores across time. Attention analysis reveals clinically meaningful patterns, such as early exposure to antibiotics (e.g., CAR), organ dysfunction markers (e.g., renal and respiratory failure), and co-treatment with neighboring patients during the first 24 hours, all consistently associated with increased MDR risk. These findings, validated across real and
synthetic datasets, confirm the model's ability to preserve the structural and temporal complexity of clinical data while delivering interpretable insights.
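For intuition only, the simplified PyTorch sketch below shows one ingredient of such a model: an adjacency matrix among clinical variables estimated from correlations, a graph convolution applied at every time step, and a recurrent layer producing a risk estimate per time step. It deliberately omits the smoothness-based and Heterogeneous Gower Distance graph estimators, the attention mechanism, and the Cartesian product and spatio-temporal graph variants described above; all sizes, the correlation threshold, and the layer choices are assumptions made for this example.

    import torch
    import torch.nn as nn

    def correlation_adjacency(x: torch.Tensor, thr: float = 0.3) -> torch.Tensor:
        """Estimate a variable-variable graph from |Pearson correlation|, then normalize it."""
        flat = x.reshape(-1, x.shape[-1])               # (patients * time steps, variables)
        adj = (torch.corrcoef(flat.T).abs() > thr).float()
        deg = adj.sum(dim=1).clamp(min=1.0)
        return adj / deg.sqrt().outer(deg.sqrt())       # symmetric normalization D^{-1/2} A D^{-1/2}

    class SimpleSTGCN(nn.Module):
        """Graph convolution over variables at each time step, followed by a GRU over time."""
        def __init__(self, n_features: int, hidden: int = 16):
            super().__init__()
            self.lift = nn.Linear(1, hidden)            # per-variable feature lift
            self.gru = nn.GRU(n_features * hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, variables); adj: (variables, variables)
            h = self.lift(x.unsqueeze(-1))              # (batch, time, variables, hidden)
            h = torch.einsum("fg,btgh->btfh", adj, h)   # diffuse features over the variable graph
            h = torch.relu(h).flatten(2)                # (batch, time, variables * hidden)
            out, _ = self.gru(h)                        # temporal modeling
            return torch.sigmoid(self.head(out)).squeeze(-1)   # risk estimate per time step

    # Toy usage: 8 patients, 14 daily time steps, 10 clinical variables.
    x = torch.randn(8, 14, 10)
    adj = correlation_adjacency(x)
    risk = SimpleSTGCN(n_features=10)(x, adj)           # shape (8, 14): one MDR risk per day
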
Fourth, the thesis addresses a critical gap in temporal explainability for MTS-to-TS tasks. Most traditional Explainable Artificial Intelligence
methods provide static or global importance scores and fail to capture how feature relevance evolves
over time. To overcome this limitation, three novel methods are introduced: (i) a pre-hoc approach
based on causal conditional mutual information, (ii) an intrinsic Hadamard attention mechanism,
and (iii) a post-hoc method called Irregular Time SHapley Additive Explanation (IT-SHAP). These
methods offer fine-grained, temporally resolved interpretations of MDR risk trajectories, allowing
clinicians to understand not only what the model predicts, but also when and why specific variables
contribute. IT-SHAP uncovered consistent patterns aligned with clinical reasoning: early Staphylococcus
and Pseudomonas cultures and multiorgan failure were strong MDR predictors, whereas
insulin therapy and artificial nutrition were associated with non-MDR profiles. Additionally, the
importance of bacterial cultures decreased over time, potentially reflecting treatment effects. Expert
validation confirmed the practical utility of these methods for supporting explainable, timely,
and actionable decision-making in critical care.
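To make the notion of time-resolved relevance tangible, the generic Python probe below (an occlusion test, not IT-SHAP, the causal conditional mutual information approach, or the Hadamard attention mechanism) blanks out one variable at one time step at a time and records how the predicted risk at the final step changes, producing a time-by-variable importance map; the stand-in model and data are toy objects.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def temporal_occlusion_importance(model, x: torch.Tensor, baseline: float = 0.0) -> torch.Tensor:
        """importance[t, f] = average drop in final-step risk when variable f is occluded at time t."""
        ref = model(x)[:, -1]                           # predicted risk at the last time step
        _, T, F = x.shape
        importance = torch.zeros(T, F)
        for t in range(T):
            for f in range(F):
                x_occluded = x.clone()
                x_occluded[:, t, f] = baseline          # blank out one variable at one time step
                importance[t, f] = (ref - model(x_occluded)[:, -1]).mean()
        return importance                               # positive values pushed the risk upward

    # Toy usage with a stand-in risk model: a GRU that outputs one risk value per time step.
    class ToyRisk(nn.Module):
        def __init__(self, n_features: int = 10, hidden: int = 8):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):
            out, _ = self.gru(x)
            return torch.sigmoid(self.head(out)).squeeze(-1)

    x = torch.randn(4, 14, 10)                          # 4 patients, 14 time steps, 10 variables
    heat = temporal_occlusion_importance(ToyRisk(), x)  # shape (14, 10): time steps x variables
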
Results demonstrate that temporal and relational modeling are critical for developing robust
AI systems in clinical settings. By combining explainable MTS representations, adaptive graph
estimation strategies, and time-resolved explainability, this research contributes to the growing
field of trustworthy AI in healthcare. The proposed architectures empower clinicians to monitor
the probability of MDR onset, identify patient-specific risk patterns, and intervene proactively,
supported by transparent and data-driven evidence.
Finally, this work identifies several promising directions for future research, including the use
of topological data analysis for enhanced graph estimation, the integration of static patient variables
into multimodal graph architectures, and the exploration of novel explainability methods
for MTS-to-TS inference tasks. These advances aim to further personalize, stabilize, and clarify
clinical predictions, reinforcing explainable graph modeling as a cornerstone in the development of
trustworthy AI systems for critical care decision support.
Publisher
Universidad Rey Juan Carlos
Description
Doctoral thesis defended at the Universidad Rey Juan Carlos de Madrid in 2025. Supervisors: Cristina Soguero Ruiz; Antonio García Marqués.
Citation
Escudero Arnanz, Ó. (2025). Spatio-Temporal Machine Learning Architectures for Clinical Multivariate Time Series: Toward Accurate and Explainable Prediction in ICUs (Tesis doctoral). Escuela Internacional de Doctorado, Universidad Rey Juan Carlos. https://doctorado.urjc.es/tesis/1821