Progressive Batching for Efficient Non-linear Least Squares
Paper in proceedings, 2020

Non-linear least squares solvers are used across a broad range of offline and real-time model fitting problems. Most improvements to the basic Gauss-Newton algorithm either tackle convergence guarantees or leverage the sparsity of the underlying problem structure for computational speedup. With the success of deep learning methods leveraging large datasets, stochastic optimization methods have recently received a lot of attention. Our work borrows ideas from both stochastic machine learning and statistics, and we present an approach for non-linear least squares that guarantees convergence while significantly reducing the required amount of computation. Empirical results show that our proposed method achieves competitive convergence rates compared to traditional second-order approaches on common computer vision problems, such as image alignment and essential matrix estimation, with very large numbers of residuals.
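The abstract describes the idea only at a high level. As a rough illustration of what a progressive-batching Gauss-Newton loop can look like, the Python sketch below takes damped Gauss-Newton steps on a small random subset of the residuals and grows the batch between iterations. The function names, the damping term, and the fixed geometric growth schedule are assumptions of this sketch, not the method proposed in the paper, whose batch-size control is designed to retain convergence guarantees.

import numpy as np

def progressive_gauss_newton(residual_fn, jacobian_fn, x0, n_residuals,
                             init_batch=32, growth=2.0, max_iters=50,
                             damping=1e-4, rng=None):
    # Illustrative sketch only: subsample the residual terms, take a
    # damped Gauss-Newton step on the subsample, then enlarge the batch
    # so late iterations see (almost) all residuals.
    # residual_fn(x, idx) -> residual vector for the selected terms
    # jacobian_fn(x, idx) -> Jacobian of those residuals w.r.t. x
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    batch = init_batch
    for _ in range(max_iters):
        idx = rng.choice(n_residuals, size=min(batch, n_residuals),
                         replace=False)
        r = residual_fn(x, idx)
        J = jacobian_fn(x, idx)
        # Damped normal equations built from the current batch only
        H = J.T @ J + damping * np.eye(x.size)
        g = J.T @ r
        x = x - np.linalg.solve(H, g)
        batch = int(np.ceil(batch * growth))  # grow batch toward full set
    return x

The fixed geometric schedule here is the simplest stand-in for a principled batch-size control; the appeal of the general approach is that early iterations pay only for a fraction of the residuals, while the final iterations approach the accuracy of a full Gauss-Newton step.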

Keywords

Non-convex optimization, non-linear least squares

Authors

Huu Le

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis

Christopher Zach

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis

Edward Rosten

Snap Inc.

Oliver Woodford

Snap Inc.

Lecture Notes in Computer Science

0302-9743 (ISSN)

Vol. 12624 LNCS, pp. 506-522

15th Asian Conference on Computer Vision (ACCV 2020)
Kyoto, Japan (held virtually)

Subject Categories

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1007/978-3-030-69535-4_31

ISBN

9783030695347

More information

Latest update

5/3/2021