Interpretable Machine Learning for Modeling, Evaluating, and Refining Clinical Decision-Making
Doctoral thesis, 2025
                    First, we examine representations of a patient's medical history that support interpretable policy modeling. As history accumulates over time, creating compact summaries that capture relevant historical aspects becomes increasingly important. Our results show that simple aggregates of past data, combined with the most recent information, allow for accurate and interpretable policy modeling across decision-making tasks. We also propose methods that leverage structure in the data collection process—such as patterns in missing feature values—to further enhance interpretability.
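To make this concrete, the minimal Python sketch below combines simple aggregates of a patient's past measurements with the most recent value. The events table and the crp column are hypothetical placeholders, not the actual data or features used in the thesis.

import pandas as pd

# Hypothetical longitudinal data: one row per patient visit,
# ordered by time within each patient.
events = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "time":       [0, 1, 2, 0, 1],
    "crp":        [12.0, 8.5, 6.0, 30.0, 25.0],  # illustrative lab value
})

events = events.sort_values(["patient_id", "time"])
grouped = events.groupby("patient_id")["crp"]

# Compact history representation: simple aggregates of past data
# combined with the most recent information.
history = pd.DataFrame({
    "crp_mean": grouped.mean(),   # aggregate over the full history
    "crp_max":  grouped.max(),    # aggregate over the full history
    "crp_last": grouped.last(),   # most recent measurement
}).reset_index()

print(history)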
Second, in the context of policy evaluation, we emphasize the need for assessments that go beyond estimating overall performance. Specifically, in which situations does the proposed policy differ from current practice? To address this question, we leverage case-based learning to identify a small set of prototypical cases in the observed data that reflect decision-making under current practice. We propose using these prototypes as a diagnostic tool to explain differences between policies, providing a compact and interpretable basis for validating new treatment strategies.
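A simplified stand-in for this prototype-based diagnostic could look as follows. Note that the thesis learns prototypes as part of the evaluation itself, whereas this sketch merely clusters patient states with scikit-learn and picks the observed case closest to each cluster center; all data and the proposed_policy rule are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical patient states and the treatments chosen under
# current practice (the behavior policy) in the observed data.
states = rng.normal(size=(500, 4))
behavior_actions = rng.integers(0, 3, size=500)

# Simplified prototype selection: cluster the states and take the
# observed case closest to each cluster center as a prototype.
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(states)
prototype_idx = [
    np.argmin(np.linalg.norm(states - c, axis=1))
    for c in km.cluster_centers_
]

# Hypothetical proposed policy: any function mapping states to treatments.
def proposed_policy(x):
    return int(x[0] > 0)  # placeholder rule

# Diagnostic: at which prototypical cases does the proposed policy
# differ from current practice?
for i in prototype_idx:
    print(f"prototype {i}: practice={behavior_actions[i]}, "
          f"proposed={proposed_policy(states[i])}")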
Third, motivated by the need for interpretable policies that are compatible with offline evaluation, we propose deriving new policies from an interpretable model of existing clinical behavior. By restricting the new policy to select from treatments most commonly observed in each patient state—as described by the model—we enable reliable evaluation. This standardization of frequent treatment patterns may reduce unwarranted practice variability and offers a promising alternative to current practice, as demonstrated in real-world examples from rheumatoid arthritis and sepsis care.
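The restriction to frequently observed treatments can be sketched as follows, assuming a discrete notion of patient state (for example, the leaves of an interpretable behavior model); all names and data here are illustrative, not the method's actual implementation.

import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Hypothetical observed data: discrete patient states and the
# treatments chosen under current practice.
states = rng.integers(0, 4, size=1000)
treatments = rng.integers(0, 5, size=1000)

# For each state, keep only the m most frequently observed treatments.
m = 2
allowed = {
    s: [t for t, _ in Counter(treatments[states == s]).most_common(m)]
    for s in np.unique(states)
}

# A refined policy then selects only among the allowed treatments,
# keeping its actions well supported in the data and hence
# amenable to reliable off-policy evaluation.
def refined_policy(state, preference):
    # `preference` ranks all treatments; pick the best allowed one.
    return next(t for t in preference if t in allowed[state])

print(refined_policy(0, preference=[4, 2, 0, 1, 3]))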
Keywords
reinforcement learning, observational data, policy modeling, sequential decision-making, interpretability, off-policy evaluation
Author
Anton Matsson
Data Science and AI
Included publications
How Should We Represent History in Interpretable Models of Clinical Policies?
Proceedings of Machine Learning Research, Vol. 259 (2024), pp. 714–734
Paper in proceedings
Prediction Models That Learn to Avoid Missing Values
Proceedings of Machine Learning Research, Vol. 267 (2025)
Paper in proceedings
Case-Based Off-Policy Evaluation Using Prototype Learning
Proceedings of Machine Learning Research, Vol. 180 (2022), pp. 1339–1349
Paper in proceedings
Patterns in the Sequential Treatment of Patients With Rheumatoid Arthritis Starting a Biologic or Targeted Synthetic Disease-Modifying Antirheumatic Drug: 10-Year Experience From a US-Based Registry
ACR Open Rheumatology, Vol. 6 (2024), pp. 5–13
Journal article
In this thesis, we advocate the use of interpretable machine learning to model observed behavior as a means of comparing treatment strategies and assessing the quality of policy evaluations. We explore representations of patient data that support interpretable modeling and propose approaches that leverage structure in the data collection process to improve model interpretability. Furthermore, we suggest using interpretable models of current behavior to guide the development of new, evaluable policies—effectively closing the loop between three key areas: policy modeling, policy evaluation, and policy refinement. Through real-world examples from the management of rheumatoid arthritis and sepsis, this thesis contributes to the long-term goal of improving clinical decision-making in both chronic and acute care settings.
Areas of Advance
Information and Communication Technology
Health Engineering
Subject Categories (SSIF 2025)
Computer Sciences
Roots
Basic sciences
Infrastructure
Chalmers e-Commons (incl. C3SE, 2020-)
ISBN
978-91-8103-251-2
Series
Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5709
Publisher
Chalmers
Public defence
HA2, Hörsalsvägen 4
Opponent: Research Scientist Li-wei H. Lehman, Institute for Medical Engineering & Science (IMES), MIT, USA
