Introduction to Continuous Optimization
Book, 2013

This second edition introduces several areas and items that were not included in the first edition, as well as several corrections. A brief summary of these changes is given next.

Chapter 1 includes a discussion of the diet problem, in addition to that of the staff planning problem, in order to introduce linear programming very early on. Figure 1.1 now has the terminating box ``Implementation'', whereas the original one had an infinite loop! Chapter 3 has been enriched by several new results on separating and supporting hyperplanes, and the associated theory of convex cones and their polar sets. Thanks to this study of separating hyperplanes, Theorem 5.17 on the necessity of the Fritz John conditions now has a complete proof. The end of Chapter 5 also includes a summary of the fascinating story of the development of the Karush--Kuhn--Tucker conditions.

The sensitivity analysis in linear programming has been expanded with a discussion in Section 10.5.3 on the addition of a variable or a constraint, as well as an introduction to column generation based on the example of the minimum cost multi-commodity network flow problem (Section 10.6). Chapter 11 includes a brief discussion of Gauss--Newton methods for least-squares problems. Chapter 12 has changed its name from ``Optimization over convex sets'' to ``Feasible-direction methods,'' in order to reflect the fact that the scope is now wider---from essentially polyhedral sets to general closed sets (which, however, will most often be assumed to be convex). In particular, we have added new sections on algorithms defined by closed descent maps---an algorithm principle which was devised and analyzed mainly in the 1960s, and which is a quite elegant means of describing iterative methods. We also utilize this principle to contrast established convergent methods (such as the Frank--Wolfe method) with failed attempts (such as the algorithm of Zoutendijk). We have also added a brief discussion of reduced gradient methods, which are relatives of the simplex method; in their original statement they are not convergent, but a small adjustment results in a closed descent map and hence a convergent method.

Exercises and their solutions are now placed at the end of the book, rather than at the end of each chapter.

The first edition from 2005 has been used in the teaching of several courses at Chalmers University of Technology and the University of Gothenburg. We wish to thank all the students who have given us remarks on the book. We would also like to thank Dr. Kin Cheong Sou for remarks and corrections on the first edition.
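To illustrate the kind of model that the diet problem leads to, here is a minimal sketch of a generic diet-type linear program (this is not the book's exact formulation; the cost coefficients c_j, nutrient contents a_{ij}, and requirements b_i are generic placeholders):

% Choose amounts x_j >= 0 of n foods, with cost c_j per unit, so that the
% total content of each nutrient i meets the requirement b_i, at minimum cost.
\begin{align*}
  \text{minimize}\quad   & \sum_{j=1}^{n} c_j x_j \\
  \text{subject to}\quad & \sum_{j=1}^{n} a_{ij} x_j \ge b_i, \qquad i = 1, \dots, m, \\
                         & x_j \ge 0, \qquad j = 1, \dots, n.
\end{align*}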

Authors

Michael Patriksson

University of Gothenburg

Chalmers, Mathematical Sciences, Mathematics

Niclas Andréasson

Anton Evgrafov

Chalmers, Mathematical Sciences

University of Gothenburg

Emil Gustavsson

University of Gothenburg

Chalmers, Mathematical Sciences, Mathematics

Magnus Önnheim

University of Gothenburg

Chalmers, Mathematical Sciences, Mathematics

Subject categories

Computational Mathematics

Foundations

Basic Sciences

Learning and Teaching

Pedagogical Work

ISBN

9789144060774

More information

Created

2017-10-07