Non-convex Rank/Sparsity Regularization and Local Minima
Paper in proceeding, 2017

This paper considers the problem of recovering either a low-rank matrix or a sparse vector from observations of linear combinations of the vector or matrix elements. Recent methods replace the non-convex regularization with ℓ1 or nuclear norm relaxations. It is well known that this approach recovers near-optimal solutions if a so-called restricted isometry property (RIP) holds. On the other hand, it also has a shrinking bias which can degrade the solution. In this paper we study an alternative non-convex regularization term that does not suffer from this bias. Our main theoretical results show that if a RIP holds, then the stationary points are often well separated, in the sense that their differences must be of high cardinality/rank. Thus, with a suitable initial solution, the approach is unlikely to fall into a bad local minimum. Our numerical tests show that the approach is likely to converge to a better solution than the standard ℓ1/nuclear-norm relaxation, even when starting from trivial initializations. In many cases our results can also be used to verify global optimality of our method.
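As a minimal sketch of the shrinking bias mentioned in the abstract, the snippet below compares the proximal operator of the ℓ1 penalty (soft thresholding), which shrinks every surviving entry, with that of a simple cardinality-type penalty (hard thresholding), which leaves large entries untouched. This is a generic illustration under assumed helper names (soft_threshold, hard_threshold) and an assumed penalty weight mu; it is not the specific regularization term proposed in the paper.

```python
import numpy as np

def soft_threshold(y, lam):
    """Prox of lam*||x||_1: every nonzero entry is shrunk by lam (shrinking bias)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def hard_threshold(y, mu):
    """Prox of (mu/2)*card(x): entries above sqrt(mu) are kept unchanged (no bias)."""
    x = y.copy()
    x[np.abs(y) < np.sqrt(mu)] = 0.0
    return x

# A sparse signal observed with small perturbations.
y = np.array([5.0, -3.0, 0.05, 0.0, 2.5, -0.1])
print(soft_threshold(y, 1.0))  # [ 4.  -2.   0.   0.   1.5  0. ] -> large entries biased towards zero
print(hard_threshold(y, 1.0))  # [ 5.  -3.   0.   0.   2.5  0. ] -> large entries retained exactly
```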

Authors

Carl Olsson

Chalmers, Signals and Systems, Signal Processing and Biomedical Engineering

Lund University

Marcus Carlsson

Lund University

Fredrik Andersson

Lund University

Viktor Larsson

Lund University

Proceedings of the IEEE International Conference on Computer Vision

1550-5499 (ISSN)

Vol. 2017-October, p. 332-340, Article no. 8237306
978-1-5386-1032-9 (ISBN)

16th IEEE International Conference on Computer Vision, ICCV 2017
Venice, Italy

Subject Categories

Computational Mathematics

Probability Theory and Statistics

Mathematical Analysis

DOI

10.1109/ICCV.2017.44

More information

Latest update

1/3/2024