Estimation of Utility-Maximizing Bounds on Potential Outcomes
Paper in proceedings, 2020
We show that, in such cases, we can improve sample efficiency by estimating simple functions that bound these outcomes instead of estimating their
conditional expectations, which may be complex and hard to estimate. Our analysis highlights a trade-off between the complexity of the learning
task and the confidence with which the learned bounds hold. Guided by these findings, we develop an algorithm for learning upper and lower
bounds on potential outcomes which optimize an objective function defined by the decision maker, subject to the probability that bounds are violated
being small. Using a clinical dataset and a well-known causality benchmark, we demonstrate that our algorithm outperforms baselines, providing tighter, more reliable bounds.
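The following is a minimal illustrative sketch (not the authors' algorithm) of the trade-off described above: learning an upper bound on outcomes that is as tight as possible while keeping violations rare. The linear bound form, the penalty weight `lam`, and the synthetic data are all assumptions made purely for demonstration.

```python
# Hedged sketch: fit a linear upper bound u(x) = w.x + b on outcomes y by
# trading off tightness (average slack above y) against a hinge penalty on
# bound violations (y exceeding u). All specifics here are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                   # covariates
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=0.5, size=500)  # outcomes

def objective(params, lam=10.0):
    w, b = params[:-1], params[-1]
    u = X @ w + b                               # candidate upper bound
    slack = np.mean(u - y)                      # tightness: smaller is tighter
    violation = np.mean(np.maximum(y - u, 0.0)) # penalty when y exceeds the bound
    return slack + lam * violation

res = minimize(objective, x0=np.zeros(X.shape[1] + 1), method="Nelder-Mead")
w_hat, b_hat = res.x[:-1], res.x[-1]
coverage = np.mean(X @ w_hat + b_hat >= y)      # fraction of outcomes covered
print(f"empirical coverage of learned upper bound: {coverage:.2f}")
```

Increasing `lam` raises the learned bound (fewer violations, higher confidence) at the cost of looser bounds, mirroring the complexity/confidence trade-off the paper analyzes.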
Authors
Maggie Makar
Massachusetts Institute of Technology (MIT)
Fredrik Johansson
Chalmers, Computer Science and Engineering (Chalmers), Data Science
John Guttag
Massachusetts Institute of Technology (MIT)
David Sontag
Massachusetts Institute of Technology (MIT)
Proceedings of the 37th International Conference on Machine Learning
Vol. 119
WASP AI/MLX Professorship
Wallenberg AI, Autonomous Systems and Software Program, 2019-08-01 -- 2023-08-01.
Subject Categories
Probability Theory and Statistics
Computer Science
DOI
10.48550/arXiv.1910.04817