Conditional Subgradient Methods and Ergodic Convergence in Nonsmooth Optimization
Doctoral thesis, 1997

The topic of the thesis is subgradient optimization methods in convex, nonsmooth optimization. These methods are frequently used, especially in the context of Lagrangean relaxation of large-scale mathematical programs, where they are remarkably often able to quickly identify near-optimal Lagrangean dual solutions. We present extensions of this class of methods, insights into their theoretical properties, and numerical evaluations. The thesis consists of an introductory chapter and three research papers.

In the first paper, we generalize classical subgradient optimization methods in the sense that the feasible set is taken into consideration when the step directions are determined, and establish the convergence of the resulting conditional subgradient optimization methods. A special case of these methods is obtained when the subgradient is projected onto the active constraints before the step is taken; this method is numerically evaluated in three applications, which show that its performance is significantly better than that of classical subgradient methods.

In the second paper, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme, and establish that the elements of an ergodic (averaged) sequence of subgradients fulfil the optimality conditions in the limit. This result enables the finite identification of active constraints at the solution obtained in the limit; it is also used to establish the ergodic convergence of sequences of multipliers. Further, it implies the convergence of a lower bounding procedure, thus providing a proper termination criterion for subgradient methods. Finally, we develop and establish the convergence of a simplicial decomposition scheme for nonsmooth optimization.

In the third paper, we consider the application of a conditional subgradient optimization method to a Lagrangean dual formulation of a convex program.
Normally, dual subgradient schemes produce neither primal feasible nor primal optimal solutions automatically. We establish that an ergodic sequence of Lagrangean subproblem solutions converges to the primal optimal set. Numerical experiments show that the primal solutions thus generated are of considerably higher quality than the Lagrangean subproblem solutions produced by the subgradient scheme.
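The ergodic primal recovery described above can be illustrated on a toy problem. The sketch below is not the thesis's algorithm or code; it is a minimal, self-contained illustration on an assumed example LP (minimize x1 + x2 subject to x1 + x2 = 1, 0 <= x <= 1), where the Lagrangean subproblem solutions jump between box corners and never become primal feasible, while their step-length-weighted average does.

```python
# Minimal sketch of ergodic primal recovery in a dual subgradient scheme.
# Toy LP:  minimize x1 + x2  s.t.  x1 + x2 = 1,  0 <= x <= 1.
# Relaxing the equality with multiplier u gives the Lagrangean
#   L(x, u) = u + (1 - u) * (x1 + x2),
# so the subproblem solution is the corner (0,0) for u <= 1 and (1,1) for u > 1.

def solve_subproblem(u):
    """Minimize (1 - u) * (x1 + x2) over the box [0, 1]^2."""
    val = 0.0 if u <= 1.0 else 1.0   # corner solution; ties broken at (0, 0)
    return (val, val)

def dual_subgradient(num_iters=50_000):
    u = 0.0                          # Lagrange multiplier for the relaxed equality
    weight_sum = 0.0
    x_avg = [0.0, 0.0]               # ergodic (step-length-weighted) primal average
    for k in range(num_iters):
        x = solve_subproblem(u)
        step = 1.0 / (k + 1)         # divergent-series step lengths
        # Update the ergodic average of the subproblem solutions.
        for i in range(2):
            x_avg[i] = (weight_sum * x_avg[i] + step * x[i]) / (weight_sum + step)
        weight_sum += step
        g = 1.0 - x[0] - x[1]        # subgradient of the dual function at u
        u += step * g                # dual ascent step
    return u, x_avg

u, x_avg = dual_subgradient()
# The multiplier approaches the dual optimum u* = 1, and the ergodic average
# approaches primal feasibility (x1 + x2 -> 1), even though each individual
# subproblem solution is a vertex with x1 + x2 equal to 0 or 2.
```

The individual subproblem solutions oscillate forever between (0, 0) and (1, 1); only the averaged sequence approaches the primal optimal face, which is the qualitative behaviour the third paper establishes in general.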

Keywords: ergodic convergence, nonsmooth optimization, primal convergence, subgradient methods, conditional subgradient, convex programming, Lagrangean relaxation

Opponent: Prof. Vladimir F. Dem'yanov


Ann-Brith Strömberg

Linköping University

Conditional subgradient optimization - theory and applications

European Journal of Operational Research, Vol. 88 (1996), pp. 382–403

Journal article

Ergodic convergence in subgradient optimization

Optimization Methods and Software, Vol. 9 (1998), pp. 93–120

Journal article

Ergodic, primal convergence in dual subgradient schemes for convex programming

Mathematical Programming, Vol. 86 (1999), pp. 283–312

Journal article

Nonsmooth convex optimization—theory and solution methodology

Naturvetenskapliga Forskningsrådet, 1998-07-01 -- 2022-12-31.

Chalmers, 1998-07-01 -- 2020-12-31.


Subject Categories

Computational Mathematics


Basic sciences





