Asymptotically Exact Error Analysis for the Generalized $\ell_2^2$-LASSO
Paper in proceedings, 2015
Given an unknown signal $x_0 \in \mathbb{R}^n$ and linear noisy measurements $y = Ax_0 + \sigma v \in \mathbb{R}^m$, the generalized $\ell_2^2$-LASSO solves $\hat{x} := \arg\min_x \frac{1}{2}\|y - Ax\|_2^2 + \sigma\lambda f(x)$. Here, $f$ is a convex regularization function (e.g., $\ell_1$-norm, nuclear norm) aiming to promote the structure of $x_0$ (e.g., sparse, low-rank), and $\lambda \geq 0$ is the regularization parameter. A related optimization problem, though not as popular or well-known, is often referred to as the generalized $\ell_2$-LASSO and takes the form $\hat{x} := \arg\min_x \|y - Ax\|_2 + \lambda f(x)$; it has been analyzed by Oymak, Thrampoulidis and Hassibi, who further made conjectures about the performance of the generalized $\ell_2^2$-LASSO. This paper establishes these conjectures rigorously. We measure performance with the normalized squared error $\mathrm{NSE}(\sigma) := \|\hat{x} - x_0\|_2^2 / (m\sigma^2)$. Assuming the entries of $A$ are i.i.d. Gaussian $\mathcal{N}(0, 1/m)$ and those of $v$ are i.i.d. $\mathcal{N}(0, 1)$, we precisely characterize the asymptotic NSE, $\mathrm{aNSE} := \lim_{\sigma \to 0} \mathrm{NSE}(\sigma)$, when the problem dimensions tend to infinity in a proportional manner. The role of $\lambda$, $f$ and $x_0$ is explicitly captured in the derived expression by means of a single geometric quantity, the Gaussian distance to the subdifferential. We conjecture that $\mathrm{aNSE} = \sup_{\sigma > 0} \mathrm{NSE}(\sigma)$. We include detailed discussions on the interpretation of our result, make connections to relevant literature, and perform computational experiments that validate our theoretical findings.
Subjects: Computer Science; Engineering
Keywords: lasso, recovery, risk
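For concreteness, below is a minimal numerical sketch of the setup described in the abstract, with $f$ taken to be the $\ell_1$-norm: it solves the generalized $\ell_2^2$-LASSO by proximal gradient descent (ISTA) and evaluates the empirical NSE at a small noise level $\sigma$. The solver choice, problem dimensions, sparsity level, and value of $\lambda$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l22_lasso(A, y, sigma, lam, n_iter=2000):
    """Solve min_x 0.5*||y - A x||_2^2 + sigma*lam*||x||_1 via ISTA.

    Illustrative solver choice; the paper's analysis does not depend on
    any particular algorithm for computing the minimizer.
    """
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz const. of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * sigma * lam)
    return x

rng = np.random.default_rng(0)
m, n, k = 128, 256, 10                 # measurements, ambient dim, sparsity (assumed)
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # entries i.i.d. N(0, 1/m)
x0 = np.zeros(n)
x0[:k] = rng.normal(size=k)            # k-sparse unknown signal
v = rng.normal(size=m)                 # noise entries i.i.d. N(0, 1)
sigma, lam = 1e-3, 2.0                 # illustrative values

y = A @ x0 + sigma * v
x_hat = l22_lasso(A, y, sigma, lam)

# NSE(sigma) := ||x_hat - x0||_2^2 / (m * sigma^2); for small sigma this
# empirical value approximates the asymptotic NSE the paper characterizes.
nse = np.linalg.norm(x_hat - x0) ** 2 / (m * sigma ** 2)
print(f"empirical NSE at sigma={sigma:g}: {nse:.4f}")
```

Repeating this experiment over decreasing values of $\sigma$ and averaging over random draws of $A$ and $v$ is one way to probe the small-$\sigma$ limit $\mathrm{aNSE}$ numerically.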