Progressive Batching for Efficient Non-linear Least Squares
Paper in proceedings, 2021

Non-linear least-squares solvers are used across a broad range of offline and real-time model fitting problems. Most improvements of the basic Gauss-Newton algorithm address convergence guarantees or leverage the sparsity of the underlying problem structure for computational speedup. With the success of deep learning methods on large datasets, stochastic optimization methods have recently received considerable attention. Our work borrows ideas from both stochastic machine learning and statistics, and we present an approach for non-linear least squares that guarantees convergence while significantly reducing the required amount of computation. Empirical results show that our proposed method achieves competitive convergence rates compared to traditional second-order approaches on common computer vision problems, such as image alignment and essential matrix estimation, with very large numbers of residuals.
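To make the general idea concrete, the following is a minimal Python sketch of a progressively batched Gauss-Newton loop, not the paper's exact algorithm: the residual_jac interface, the initial batch size, the damping constant, and the probe-based batch-growth test are all illustrative assumptions standing in for the paper's statistical criteria.

import numpy as np

def gauss_newton_progressive(residual_jac, n_total, x0, batch0=64,
                             growth=2.0, max_iters=100, tol=1e-8):
    """Sketch of a progressively batched Gauss-Newton loop.

    residual_jac(x, idx) -> (r, J): residuals and Jacobian for the
    subset of residual indices idx (hypothetical interface).
    """
    x = x0.copy()
    batch = batch0
    for _ in range(max_iters):
        # Solve the Gauss-Newton normal equations on a random subset
        # of the residuals, with a small damping term for stability.
        idx = np.random.choice(n_total, size=min(batch, n_total), replace=False)
        r, J = residual_jac(x, idx)
        JtJ = J.T @ J + 1e-8 * np.eye(J.shape[1])
        step = np.linalg.solve(JtJ, -J.T @ r)
        x = x + step

        # Grow the batch when the subsampled step stops helping; a
        # fresh subset acts as a noisy probe of the full cost (an
        # illustrative stand-in for a principled statistical test).
        probe = np.random.choice(n_total, size=min(batch, n_total), replace=False)
        r_new, _ = residual_jac(x, probe)
        r_old, _ = residual_jac(x - step, probe)
        if np.sum(r_new**2) >= np.sum(r_old**2):
            batch = int(min(growth * batch, n_total))
        if np.linalg.norm(step) < tol:
            break
    return x

The intended behavior is that early iterations touch only a small fraction of the residuals, and the batch grows toward the full problem only as the subsampled steps cease to yield reliable progress.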

Keywords

Non-convex optimization, non-linear least squares

Authors

Huu Le

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Christopher Zach

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Edward Rosten

Snap Inc.

Oliver Woodford

Snap Inc.

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN), 1611-3349 (eISSN)

Vol. 12624 LNCS, pp. 506-522
978-3-030-69534-7 (ISBN)

15th Asian Conference on Computer Vision
Kyoto, Japan (virtual/online)

Subject Categories

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1007/978-3-030-69535-4_31

Latest update

3/21/2023