Journal article, 2015

When a convex optimization problem is solved through a Lagrangian dual reformulation, subgradient optimization methods are attractive, since they often find near-optimal dual solutions quickly. However, an optimal primal solution is generally not obtained directly by such a subgradient approach unless the Lagrangian dual function is differentiable at an optimal solution. We construct a sequence of convex combinations of primal subproblem solutions, a so-called ergodic sequence, which is shown to converge to an optimal primal solution when the convexity weights are appropriately chosen. We generalize previous convergence results from linear to convex optimization and present a new set of rules for constructing the convexity weights that define the ergodic sequence of primal solutions. In contrast to previously proposed rules, these exploit more information from later subproblem solutions than from earlier ones. We evaluate the proposed rules on a set of nonlinear multicommodity flow problems and demonstrate that they clearly outperform the rules proposed previously.
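The idea of recovering a primal solution as a weighted average of subproblem solutions can be illustrated with a minimal sketch. The toy problem, step lengths, and function names below are illustrative assumptions, not the paper's setup, and the classic rule of convexity weights proportional to step lengths is used rather than the paper's new rules, which place more weight on later subproblem solutions.

```python
# Illustrative sketch: ergodic primal recovery from a dual subgradient method.
# Toy problem (an assumption for illustration):
#     min -x  s.t.  x <= 0.5,  x in [0, 1].
# The Lagrangian subproblem min_{x in [0,1]} -x + lam*(x - 0.5) has solutions
# that jump between 0 and 1, so the dual function is nondifferentiable at the
# optimum and the last subproblem solution never converges -- but an ergodic
# (convex-combination) sequence of subproblem solutions does, to x* = 0.5.

def subproblem(lam):
    """Minimizer of -x + lam*(x - 0.5) over [0, 1]."""
    return 1.0 if lam < 1.0 else 0.0

def ergodic_recovery(n_iter=2000):
    lam = 0.0
    weighted_sum, weight_total = 0.0, 0.0
    for k in range(n_iter):
        x = subproblem(lam)
        step = 1.0 / (k + 1) ** 0.5   # divergent-series, square-summable-rate steps
        # Classic ergodic rule: convexity weights proportional to step lengths.
        # (The paper's new rules instead weight later iterates more heavily.)
        weighted_sum += step * x
        weight_total += step
        # Projected subgradient ascent on the dual; x - 0.5 is a subgradient.
        lam = max(0.0, lam + step * (x - 0.5))
    return weighted_sum / weight_total, lam

x_bar, lam = ergodic_recovery()
# x_bar approaches the primal optimum 0.5; lam approaches the dual optimum 1.
```

Note that the individual iterates `subproblem(lam)` only ever take the values 0 and 1; it is the convex combination that converges to the primal optimum, which is the phenomenon the abstract describes.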

Lagrangian duality

Ergodic convergence

Subgradient optimization

Primal recovery

Convex programming

Nonlinear multicommodity flow problem

University of Gothenburg

Chalmers, Mathematical Sciences, Mathematics

0025-5610 (ISSN)

Vol. 150 2 365-390

Transport

Computational Mathematics

Basic sciences

10.1007/s10107-014-0772-2