Primal convergence from dual subgradient methods for convex optimization
Journal article, 2015

When solving a convex optimization problem through a Lagrangian dual reformulation, subgradient optimization methods are often employed, since they typically find near-optimal dual solutions quickly. However, an optimal primal solution is generally not obtained directly by such a subgradient approach unless the Lagrangian dual function is differentiable at an optimal solution. We construct a sequence of convex combinations of primal subproblem solutions, a so-called ergodic sequence, which is shown to converge to an optimal primal solution when the convexity weights are appropriately chosen. We generalize previous convergence results from linear to convex optimization and present a new set of rules for constructing the convexity weights that define the ergodic sequence of primal solutions. In contrast to previously proposed rules, these exploit more information from later subproblem solutions than from earlier ones. We evaluate the proposed rules on a set of nonlinear multicommodity flow problems and demonstrate that they clearly outperform the rules previously proposed.
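The idea of recovering a primal solution by averaging subproblem solutions can be illustrated on a toy problem. Below is a minimal sketch, assuming a projected dual subgradient step with step length 1/k and, for illustration only, convexity weights proportional to the iteration index so that later iterates count more; the weight rules analyzed in the paper are more general.

```python
# Primal recovery via an ergodic sequence in a dual subgradient method,
# on the toy problem
#     minimize -x  subject to  x <= 0.5,  x in [0, 1].
# The Lagrangian subproblem  min_{x in [0,1]} -x + mu*(x - 0.5)  is solved
# by x = 1 when mu < 1 and x = 0 when mu >= 1, so the raw subproblem
# solutions oscillate between extreme points and never converge; their
# convexity-weighted average approaches the primal optimum x* = 0.5.

def dual_subgradient_with_ergodic_primal(n_iter=2000):
    mu = 0.0                      # dual multiplier, kept nonnegative
    x_erg, weight_sum = 0.0, 0.0  # running weighted (ergodic) average
    for k in range(1, n_iter + 1):
        # Lagrangian subproblem solution (closed form for this toy problem).
        x = 1.0 if mu < 1.0 else 0.0
        # Convexity weights proportional to k, favoring later iterates
        # (an illustrative choice; the paper's rules are more general).
        w = float(k)
        x_erg = (weight_sum * x_erg + w * x) / (weight_sum + w)
        weight_sum += w
        # Subgradient of the dual function at mu, then a projected step.
        g = x - 0.5
        mu = max(0.0, mu + g / k)
    return x_erg, mu

x_erg, mu = dual_subgradient_with_ergodic_primal()
print(x_erg, mu)  # x_erg close to 0.5, mu close to the dual optimum 1.0
```

Note how the ergodic iterate converges although the individual subproblem solutions do not; this is precisely the situation, a nondifferentiable dual function at the optimum, that motivates the construction.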

Convex programming

Primal recovery

Ergodic convergence

Subgradient optimization

Nonlinear multicommodity flow problem

Lagrangian duality

Author

Emil Gustavsson

Chalmers, Mathematical Sciences, Mathematics

University of Gothenburg

Michael Patriksson

Chalmers, Mathematical Sciences, Mathematics

University of Gothenburg

Ann-Brith Strömberg

Chalmers, Mathematical Sciences, Mathematics

University of Gothenburg

Mathematical Programming, Series B

0025-5610 (ISSN) 1436-4646 (eISSN)

Vol. 150, No. 2, pp. 365–390

Nonsmooth convex optimization—theory and solution methodology

Naturvetenskapliga Forskningsrådet, 1998-07-01 -- 2022-12-31.

Chalmers, 1998-07-01 -- 2020-12-31.

Subject Categories (SSIF 2011)

Computational Mathematics

Roots

Basic sciences

DOI

10.1007/s10107-014-0772-2
