Primal convergence from dual subgradient methods for convex optimization
Journal article, 2015

When solving a convex optimization problem through a Lagrangian dual reformulation, subgradient optimization methods are favorably utilized, since they often find near-optimal dual solutions quickly. However, an optimal primal solution is generally not obtained directly from such a subgradient approach unless the Lagrangian dual function is differentiable at an optimal solution. We construct a sequence of convex combinations of primal subproblem solutions, a so-called ergodic sequence, which is shown to converge to an optimal primal solution when the convexity weights are appropriately chosen. We generalize previous convergence results from linear to convex optimization and present a new set of rules for constructing the convexity weights that define the ergodic sequence of primal solutions. In contrast to previously proposed rules, the new rules exploit more information from later subproblem solutions than from earlier ones. We evaluate the proposed rules on a set of nonlinear multicommodity flow problems and demonstrate that they clearly outperform those proposed previously.
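The mechanism described in the abstract can be illustrated on a toy linear program whose Lagrangian dual has a kink at its optimum, so that the subproblem solutions oscillate between two extreme points and never become primal feasible, while their ergodic average does. The sketch below is a minimal, hypothetical illustration, not the paper's algorithm: the problem instance, the step length rule, and the choice of convexity weights proportional to the iteration index (a simple stand-in for rules that emphasize later subproblem solutions) are all assumptions made for this example.

```python
import numpy as np

# Toy problem: minimize x1 + x2  subject to  x1 + x2 = 1,  0 <= x <= 1.
# For a multiplier u, the Lagrangian subproblem is
#   min_{x in [0,1]^2} (1 - u) * (x1 + x2) + u,
# whose solution jumps between (0, 0) and (1, 1). The dual optimum
# u* = 1 is a kink, so the subproblem solutions themselves never
# satisfy the relaxed constraint -- their ergodic average can.

def subproblem(u):
    x = np.zeros(2) if u < 1.0 else np.ones(2)
    g = 1.0 - x.sum()  # subgradient of the dual function at u
    return x, g

u = 0.0
x_avg = np.zeros(2)      # ergodic (weighted-average) primal iterate
weight_sum = 0.0
for k in range(1, 2001):
    x, g = subproblem(u)
    w = k                # later subproblem solutions weighted more heavily
    weight_sum += w
    x_avg += (w / weight_sum) * (x - x_avg)  # running weighted mean
    u += (1.0 / k) * g   # divergent-series step length rule

print(x_avg)  # close to (0.5, 0.5): feasible, with x1 + x2 near 1
```

The individual subproblem solutions alternate between the infeasible points (0, 0) and (1, 1), while the weighted average drifts toward the primal optimal face; the convexity weights only need to satisfy conditions such as those analyzed in the paper for this convergence to hold.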

Subgradient optimization

Lagrangian duality

Primal recovery

Convex programming

Nonlinear multicommodity flow problem

Ergodic convergence


Emil Gustavsson

Chalmers, Matematiska vetenskaper, Matematik

Göteborgs universitet

Michael Patriksson

Göteborgs universitet

Chalmers, Matematiska vetenskaper, Matematik

Ann-Brith Strömberg

Göteborgs universitet

Chalmers, Matematiska vetenskaper, Matematik

Mathematical Programming, Series B

0025-5610 (ISSN)

Vol. 150, No. 2, pp. 365-390

Nondifferentiable convex optimization - theory and solution methodology

Chalmers, 1998-07-01 -- 2020-12-31.

Naturvetenskapliga Forskningsrådet, 1998-07-01 -- 2022-12-31.

Basic sciences


