Faster Algorithms and Constant Lower Bounds for the Worst-Case Expected Error
Paper in proceedings, 2021

The study of statistical estimation without distributional assumptions on data values, but with knowledge of data collection methods, was recently introduced by Chen, Valiant and Valiant (NeurIPS 2020). In this framework, the goal is to design estimators that minimize the worst-case expected error. Here the expectation is over a known, randomized data collection process from some population, and the data values corresponding to each element of the population are assumed to be worst-case. Chen, Valiant and Valiant show that, when data values are ℓ∞-normalized, there is a polynomial time algorithm to compute an estimator for the mean with worst-case expected error that is within a factor π/2 of the optimum within the natural class of semilinear estimators. However, their algorithm is based on optimizing a somewhat complex concave objective function over a constrained set of positive semidefinite matrices, and thus does not come with explicit runtime guarantees beyond being polynomial time in the input. In this paper we design provably efficient algorithms for approximating the optimal semilinear estimator based on online convex optimization. In the setting where data values are ℓ∞-normalized, our algorithm achieves a π/2-approximation by iteratively solving a sequence of standard SDPs. When data values are ℓ2-normalized, our algorithm iteratively computes the top eigenvector of a sequence of matrices, and does not lose any multiplicative approximation factor. Further, using experiments in settings where sample membership is correlated with data values (e.g. "importance sampling" and "snowball sampling"), we show that our ℓ2-normalized algorithm gives a similar advantage over standard estimators as the original ℓ∞-normalized algorithm of Chen, Valiant and Valiant, but with much lower computational complexity. We complement these positive results by stating a simple combinatorial condition which, if satisfied by a data collection process, implies that any (not necessarily semilinear) estimator for the mean has constant worst-case expected error.
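
For intuition, the ℓ2-normalized case described above admits a small illustrative sketch. It is not the paper's algorithm: it assumes a simplified finite model in which the data collection process is an explicit distribution over a handful of subsets and the estimator applies per-sample weights. Under that assumption, the worst-case expected squared error over all values with ‖x‖2 ≤ 1 equals the top eigenvalue of an error matrix determined by the weights, and a basic subgradient scheme alternates a top-eigenvector computation (the adversary's best response) with a weight update. All names and the toy process below are hypothetical.

    # Illustrative toy sketch (not the paper's exact algorithm): minimize the
    # worst-case expected squared error of a per-sample-weighted mean estimator
    # when data values are l2-normalized. The data collection process here is
    # an assumed, explicit distribution over a few subsets of the population.
    import numpy as np

    n = 5  # population size
    # Assumed toy data-collection process: possible samples and their probabilities.
    samples = [np.array([1, 1, 0, 0, 0]),
               np.array([0, 0, 1, 1, 1]),
               np.array([1, 0, 1, 0, 1])]
    probs = np.array([0.5, 0.3, 0.2])

    mean_coeffs = np.ones(n) / n  # coefficients of the true mean (1/n) * sum_i x_i

    def error_matrix(W):
        """M(W) = E_S[(W_S * 1_S - 1/n)(W_S * 1_S - 1/n)^T].
        The worst-case expected squared error over ||x||_2 <= 1 is its top eigenvalue."""
        M = np.zeros((n, n))
        for p, s, w in zip(probs, samples, W):
            a = w * s - mean_coeffs  # estimator coefficients minus mean coefficients
            M += p * np.outer(a, a)
        return M

    # W[k, i]: weight placed on x_i when sample k is drawn; start from sample-mean weights.
    W0 = np.array([s / s.sum() for s in samples])
    W = W0.copy()

    T = 200
    avg_W = np.zeros_like(W)
    for t in range(T):
        M = error_matrix(W)
        # Adversary's best response for l2-normalized data: top eigenvector of M(W).
        _, evecs = np.linalg.eigh(M)
        v = evecs[:, -1]
        # Subgradient of the convex objective: gradient of sum_k p_k <W_k*1_k - 1/n, v>^2.
        grad = np.zeros_like(W)
        for k, (p, s, w) in enumerate(zip(probs, samples, W)):
            a = w * s - mean_coeffs
            grad[k] = 2.0 * p * (a @ v) * (v * s)
        W = W - (0.5 / np.sqrt(t + 1)) * grad  # decreasing step size
        avg_W += W / T

    print("worst-case error, sample-mean weights:", np.linalg.eigvalsh(error_matrix(W0))[-1])
    print("worst-case error, averaged iterate:   ", np.linalg.eigvalsh(error_matrix(avg_W))[-1])

Since the objective sup over ‖x‖2 ≤ 1 of the expected squared error is convex in the weights, the top eigenvector supplies a valid subgradient, and comparing the two printed values indicates whether the averaged iterate improves on the naive sample-mean weights for this particular toy process.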

Author

Jonah Brown-Cohen

Data Science and AI

Advances in Neural Information Processing Systems

1049-5258 (ISSN)

Vol. 33, pp. 27709-27719
9781713845393 (ISBN)

35th Conference on Neural Information Processing Systems, NeurIPS 2021
Virtual, Online

Subject categories

Probability theory and statistics

Control engineering

Signal processing

More information

Last updated

2022-06-27