A Unified Optimization Framework for Low-Rank Inducing Penalties
Paper in proceeding, 2020

In this paper we study the convex envelopes of a new class of functions. Using this approach, we are able to unify two important classes of regularizers: unbiased nonconvex formulations and weighted nuclear norm penalties. This opens up the possibility of combining the best of both worlds, and of leveraging each method's contribution in cases where simply enforcing one of the regularizers is insufficient. We show that the proposed regularizers can be incorporated in standard splitting schemes such as the Alternating Direction Method of Multipliers (ADMM), as well as other subgradient methods. Furthermore, we provide an efficient way of computing the proximal operator. Lastly, we show on real non-rigid structure-from-motion (NRSfM) datasets the issues that arise from using weighted nuclear norm penalties, and how these can be remedied using our proposed method.
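The record above only carries the abstract, so as a rough illustration of the kind of building block it refers to, here is a minimal sketch (in Python/NumPy, not taken from the paper) of the proximal operator of a plain weighted nuclear norm, computed by weighted singular value thresholding. The paper's proposed regularizer and its prox differ from this; the function and variable names below are hypothetical.

```python
import numpy as np

def prox_weighted_nuclear_norm(Y, weights, step=1.0):
    """Sketch of argmin_X 0.5*||X - Y||_F^2 + step * sum_i w_i * sigma_i(X).

    For non-negative weights that are non-decreasing in i (so small singular
    values are penalized at least as much as large ones), this reduces to
    soft-thresholding the singular values of Y, as in weighted nuclear norm
    minimization (Gu et al.).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - step * np.asarray(weights), 0.0)  # shrink each singular value
    return (U * s_thr) @ Vt  # rescale columns of U and recombine

# Hypothetical usage: shrink a random matrix toward low rank.
Y = np.random.randn(6, 4)
weights = np.linspace(0.1, 1.0, 4)  # non-decreasing weights
X = prox_weighted_nuclear_norm(Y, weights, step=0.5)
```

In a standard ADMM splitting, a prox of this form would typically be applied once per iteration to the variable carrying the low-rank penalty, with the data term handled in a separate subproblem; the paper's contribution concerns the regularizer (and its prox) itself, not the splitting scheme.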

Authors

Marcus Valtonen Örnhag

Lund University

Carl Olsson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Lund University

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

1063-6919 (ISSN)

pp. 8471-8480, Article no. 9156671

IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Virtual/online, USA

Optimization Methods with Performance Guarantees for Subspace Learning

Swedish Research Council (VR) (2018-05375), 2019-01-01 -- 2022-12-31.

Subject Categories

Computational Mathematics

Control Engineering

Mathematical Analysis

DOI

10.1109/CVPR42600.2020.00850

More information

Latest update

11/10/2020