Primal convergence from dual subgradient methods for convex optimization
Journal article, 2015

When solving a convex optimization problem through a Lagrangian dual reformulation, subgradient optimization methods are favorably utilized, since they often find near-optimal dual solutions quickly. However, an optimal primal solution is generally not obtained directly through such a subgradient approach unless the Lagrangian dual function is differentiable at an optimal solution. We construct a sequence of convex combinations of primal subproblem solutions, a so-called ergodic sequence, which is shown to converge to an optimal primal solution when the convexity weights are appropriately chosen. We generalize previous convergence results from linear to convex optimization and present a new set of rules for constructing the convexity weights that define the ergodic sequence of primal solutions. In contrast to previously proposed rules, they exploit more information from later subproblem solutions than from earlier ones. We evaluate the proposed rules on a set of nonlinear multicommodity flow problems and demonstrate that they clearly outperform those previously proposed.
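The mechanism described in the abstract can be illustrated on a toy problem. The sketch below is a hypothetical example, not the paper's method: the problem, the harmonic step rule, and the weight rule (later iterates weighted proportionally to their index, in the spirit of the paper's rules but not their exact formula) are all illustrative assumptions. We minimize x1 + x2 subject to x1 + x2 >= 1 on the unit box; dualizing the coupling constraint makes the dual function nondifferentiable at its optimum u* = 1, so the subproblem solutions oscillate between (0, 0) and (1, 1) and only their ergodic average approaches a primal optimum.

```python
def subproblem(u):
    """Lagrangian subproblem: minimize (1 - u)*(x1 + x2) over the unit box.
    The minimizer is x_i = 1 if u > 1, else 0 (any value is optimal at u = 1)."""
    xi = 1.0 if u > 1.0 else 0.0
    return (xi, xi)

def dual_subgradient_with_ergodic(iters=5000, u0=0.0):
    """Projected dual subgradient ascent with a weighted ergodic primal average.
    The weight rule (w_k proportional to k) favors later subproblem solutions;
    it is an illustrative choice, not the exact rule from the paper."""
    u = u0
    wsum = 0.0
    xbar = [0.0, 0.0]                  # ergodic (weighted-average) primal iterate
    for k in range(iters):
        x = subproblem(u)
        g = 1.0 - x[0] - x[1]          # subgradient of the dual function at u
        step = 1.0 / (k + 1.0)         # divergent-series step length rule
        w = k + 1.0                    # convexity weight: later iterates count more
        wsum += w
        # incremental weighted average of the primal subproblem solutions
        xbar[0] += w * (x[0] - xbar[0]) / wsum
        xbar[1] += w * (x[1] - xbar[1]) / wsum
        u = max(0.0, u + step * g)     # projected subgradient ascent step
    return u, xbar

u, xbar = dual_subgradient_with_ergodic()
```

Here the raw subproblem solutions never converge (they alternate between the two box corners), while the ergodic iterate xbar settles near the optimal face x1 + x2 = 1, which is the behavior the abstract's convergence result formalizes.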

Keywords

Subgradient optimization

Lagrangian duality

Primal recovery

Convex programming

Nonlinear multicommodity flow problem

Ergodic convergence

Author

Emil Gustavsson

Chalmers, Mathematical Sciences, Mathematics

University of Gothenburg

Michael Patriksson

University of Gothenburg

Chalmers, Mathematical Sciences, Mathematics

Ann-Brith Strömberg

University of Gothenburg

Chalmers, Mathematical Sciences, Mathematics

Mathematical Programming, Series B

0025-5610 (ISSN)

Vol. 150, No. 2, pp. 365–390

Nonsmooth convex optimization—theory and solution methodology

Chalmers, 1998-07-01 -- 2020-12-31.

Areas of Advance

Transport

Subject Categories

Computational Mathematics

Roots

Basic sciences

DOI

10.1007/s10107-014-0772-2

More information

Latest update

2020-10-22