Interpretable Machine Learning for Modeling, Evaluating, and Refining Clinical Decision-Making
Doctoral thesis, 2025
First, we examine representations of a patient's medical history that support interpretable policy modeling. As history accumulates over time, creating compact summaries that capture relevant historical aspects becomes increasingly important. Our results show that simple aggregates of past data, combined with the most recent information, allow for accurate and interpretable policy modeling across decision-making tasks. We also propose methods that leverage structure in the data collection process—such as patterns in missing feature values—to further enhance interpretability.
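The following minimal Python sketch illustrates the kind of representation described above: simple aggregates of past measurements combined with the most recent values. It is not code from the thesis; the feature names (crp, das28) and the pandas-based layout are illustrative assumptions.

```python
# Minimal sketch (assumed feature names): summarize a time-ordered patient
# history as simple aggregates of past data plus the most recent values,
# yielding a compact, interpretable state for policy modeling.
import pandas as pd

def summarize_history(history: pd.DataFrame, features=("crp", "das28")) -> dict:
    """Map a time-ordered patient history to a compact, interpretable state."""
    state = {}
    for f in features:
        observed = history[f].dropna()
        state[f"{f}_last"] = observed.iloc[-1] if not observed.empty else None  # most recent value
        state[f"{f}_mean"] = observed.mean() if not observed.empty else None    # aggregate of the past
        state[f"{f}_n_obs"] = int(observed.shape[0])                            # how often it was measured
    return state

# Toy usage: three visits, with one lab value missing at the second visit.
visits = pd.DataFrame({"crp": [12.0, None, 8.5], "das28": [5.1, 4.6, 3.9]})
print(summarize_history(visits))
```

Each feature contributes its most recent value, its historical mean, and the number of times it was observed, giving a state description that can be read directly.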
Second, in the context of policy evaluation, we emphasize the need for assessments that go beyond estimating overall performance. Specifically, in which situations does the proposed policy differ from current practice? To address this question, we leverage case-based learning to identify a small set of prototypical cases in the observed data that reflect decision-making under current practice. We propose using these prototypes as a diagnostic tool to explain differences between policies, providing a compact and interpretable basis for validating new treatment strategies.
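As a rough illustration of the prototype idea, the sketch below selects, for each k-means cluster, the observed case closest to the cluster centre and contrasts the treatment observed there with the one a proposed policy would choose. This is a simplified stand-in, not the prototype-learning method developed in the thesis; the inputs X, observed_actions, and proposed_policy are hypothetical.

```python
# Illustrative stand-in for prototype-based comparison of policies:
# pick a small set of representative observed cases and report, for each,
# the observed treatment versus the treatment a proposed policy would choose.
import numpy as np
from sklearn.cluster import KMeans

def prototype_comparison(X, observed_actions, proposed_policy, n_prototypes=5, seed=0):
    """Return, per prototype case, the observed and the proposed treatment."""
    km = KMeans(n_clusters=n_prototypes, random_state=seed, n_init=10).fit(X)
    report = []
    for centre in km.cluster_centers_:
        idx = int(np.argmin(np.linalg.norm(X - centre, axis=1)))  # closest observed case
        report.append({
            "prototype_index": idx,
            "observed_treatment": observed_actions[idx],
            "proposed_treatment": proposed_policy(X[idx]),
        })
    return report

# Toy usage: random patient features and a policy that always proposes treatment 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
observed = rng.integers(0, 3, size=200)
print(prototype_comparison(X, observed, lambda x: 1))
```

Inspecting such a table of prototypes makes it easy to see in which situations a proposed policy departs from current practice.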
Third, motivated by the need for interpretable policies that are compatible with offline evaluation, we propose deriving new policies from an interpretable model of existing clinical behavior. By restricting the new policy to select from treatments most commonly observed in each patient state—as described by the model—we enable reliable evaluation. This standardization of frequent treatment patterns may reduce unwarranted practice variability and offers a promising alternative to current practice, as demonstrated in real-world examples from rheumatoid arthritis and sepsis care.
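A hedged sketch of the restriction step follows: estimate how frequently each treatment is observed in a given (discretized) patient state and let the new policy choose only among the most common ones, which keeps the policy within the support of the observed data. The state labels, treatment names, and choice of k are illustrative assumptions.

```python
# Sketch of restricting a new policy to frequently observed treatments per state.
from collections import Counter, defaultdict

def common_treatments(states, treatments, k=2):
    """Per state, keep the k most frequently observed treatments."""
    counts = defaultdict(Counter)
    for s, a in zip(states, treatments):
        counts[s][a] += 1
    return {s: [a for a, _ in c.most_common(k)] for s, c in counts.items()}

def restricted_policy(state, preference, allowed):
    """Pick the preferred treatment, but only among those commonly observed."""
    candidates = allowed.get(state, [])
    ranked = sorted(candidates, key=lambda a: preference.get(a, 0), reverse=True)
    return ranked[0] if ranked else None

# Toy usage: two states, two treatments.
states = ["low_activity", "low_activity", "high_activity", "high_activity", "high_activity"]
treatments = ["tnf_inhibitor", "tnf_inhibitor", "jak_inhibitor", "tnf_inhibitor", "jak_inhibitor"]
allowed = common_treatments(states, treatments, k=1)
print(restricted_policy("high_activity", {"tnf_inhibitor": 1.0, "jak_inhibitor": 0.5}, allowed))
```

In the toy example, the preference favours the TNF inhibitor, but the restriction forces the policy to select the JAK inhibitor, the treatment most commonly observed in that state; because only observed treatment patterns are used, the resulting policy can be evaluated from the same data.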
reinforcement learning
observational data
policy modeling
sequential decision-making
interpretability
off-policy evaluation
Author
Anton Matsson
Data Science and AI 3
How Should We Represent History in Interpretable Models of Clinical Policies?
Proceedings of Machine Learning Research, Vol. 259 (2024), pp. 714-734
Paper in proceedings
Prediction Models That Learn to Avoid Missing Values
Proceedings of Machine Learning Research, Vol. 267 (2025)
Paper in proceedings
Case-Based Off-Policy Evaluation Using Prototype Learning
Proceedings of Machine Learning Research, Vol. 180 (2022), pp. 1339-1349
Paper in proceedings
Patterns in the Sequential Treatment of Patients With Rheumatoid Arthritis Starting a Biologic or Targeted Synthetic Disease-Modifying Antirheumatic Drug: 10-Year Experience From a US-Based Registry
ACR Open Rheumatology, Vol. 6 (2024), pp. 5-13
Journal article
In this thesis, we advocate the use of interpretable machine learning to model observed behavior as a means of comparing treatment strategies and assessing the quality of policy evaluations. We explore representations of patient data that support interpretable modeling and propose approaches that leverage structure in the data collection process to improve model interpretability. Furthermore, we suggest using interpretable models of current behavior to guide the development of new, evaluable policies—effectively closing the loop between three key areas: policy modeling, policy evaluation, and policy refinement. Through real-world examples from the management of rheumatoid arthritis and sepsis, this thesis contributes to the long-term goal of improving clinical decision-making in both chronic and acute care settings.
Areas of Advance
Information and Communication Technology
Health and Technology
Subject Categories (SSIF 2025)
Computer Science
Foundations
Basic sciences
Infrastructure
Chalmers e-Commons (incl. C3SE, 2020-)
ISBN
978-91-8103-251-2
Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5709
Publisher
Chalmers
HA2, Hörsalsvägen 4
Opponent: Research Scientist Li-wei H. Lehman, Institute for Medical Engineering & Science (IMES), MIT, USA