Interior Point Methods of Mathematical Programming

Page 2: Interior Point Methods of Mathematical Programming978-1-4613-3449-1/1.pdf · Mathematik und Statistik U niversitat W iirz burg 97074 Wiirzburg, Germany e-mail: jarre@mathematik.uni-wuerzburg.de

Applied Optimization

Volume 5

Series Editors:

Panos M. Pardalos University of Florida, U.S.A.

Donald Hearn University of Florida, U.S.A.

The titles published in this series are listed at the end of this volume.


Interior Point Methods of Mathematical Programming

Edited by

Tamas Terlaky
Delft University of Technology

KLUWER ACADEMIC PUBLISHERS DORDRECHT / BOSTON / LONDON


A C.I.P. Catalogue record for this book is available from the Library of Congress.

ISBN-13: 978-1-4613-3451-4    e-ISBN-13: 978-1-4613-3449-1    DOI: 10.1007/978-1-4613-3449-1

Published by Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands.

Kluwer Academic Publishers incorporates the publishing programmes of D. Reidel, Martinus Nijhoff, Dr W. Junk and MTP Press.

Sold and distributed in the U.S.A. and Canada by Kluwer Academic Publishers, 101 Philip Drive, Norwell, MA 02061, U.S.A.

In all other countries, sold and distributed by Kluwer Academic Publishers Group, P.O. Box 322, 3300 AH Dordrecht, The Netherlands.

Printed on acid-free paper

All Rights Reserved. © 1996 Kluwer Academic Publishers. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.


This book is dedicated to the memory of Professor Gyorgy Sonnevend, the father of analytic centers.


CONTENTS

PREFACE xv

Part I  LINEAR PROGRAMMING 1

1 INTRODUCTION TO THE THEORY OF INTERIOR POINT METHODS
  Benjamin Jansen, Cornelis Roos, Tamas Terlaky 3
  1.1 The Theory of Linear Programming 3
  1.2 Sensitivity Analysis in Linear Programming 14
  1.3 Concluding Remarks 30
  REFERENCES 31

2 AFFINE SCALING ALGORITHM
  Takashi Tsuchiya 35
  2.1 Introduction 35
  2.2 Problem and Preliminaries 38
  2.3 The Affine Scaling Algorithm 40
  2.4 Nondegeneracy Assumptions 47
  2.5 Basic Properties of the Iterative Process 50
  2.6 Global Convergence Proof Under a Nondegeneracy Assumption 54
  2.7 Global Convergence Proof Without Nondegeneracy Assumptions 56
  2.8 The Homogeneous Affine Scaling Algorithm 59
  2.9 More on the Global Convergence Proof of the Affine Scaling Algorithm 67
  2.10 Why Two-Thirds is Sharp for the Affine Scaling? 68
  2.11 Superlinear Convergence of the Affine Scaling Algorithm 69
  2.12 On the Counterexample of Global Convergence of the Affine Scaling Algorithm 70
  2.13 Concluding Remarks 73
  2.14 Appendix: How to Solve General LP Problems with the Affine Scaling Algorithm 75
  REFERENCES 77

3 TARGET-FOLLOWING METHODS FOR LINEAR PROGRAMMING
  Benjamin Jansen, Cornelis Roos, Tamas Terlaky 83
  3.1 Introduction 83
  3.2 Short-step Primal-dual Algorithms for LP 86
  3.3 Applications 93
  3.4 Concluding Remarks 121
  REFERENCES 121

4 POTENTIAL REDUCTION ALGORITHMS
  Kurt M. Anstreicher 125
  4.1 Introduction 125
  4.2 Potential Functions for Linear Programming 126
  4.3 Karmarkar's Algorithm 130
  4.4 The Affine Potential Reduction Algorithm 134
  4.5 The Primal-Dual Algorithm 139
  4.6 Enhancements and Extensions 142
  REFERENCES 151

5 INFEASIBLE-INTERIOR-POINT ALGORITHMS
  Shinji Mizuno 159
  5.1 Introduction 159
  5.2 An IIP Algorithm Using a Path of Centers 161
  5.3 Global Convergence 164
  5.4 Polynomial Time Convergence 172
  5.5 An IIP Algorithm Using a Surface of Centers 175
  5.6 A Predictor-corrector Algorithm 178
  5.7 Convergence Properties 181
  5.8 Concluding Remarks 184
  REFERENCES 185

6 IMPLEMENTATION OF INTERIOR-POINT METHODS FOR LARGE SCALE LINEAR PROGRAMS
  Erling D. Andersen, Jacek Gondzio, Csaba Meszaros, Xiaojie Xu 189
  6.1 Introduction 190
  6.2 The Primal-dual Algorithm 193
  6.3 Self-dual Embedding 200
  6.4 Solving the Newton Equations 204
  6.5 Presolve 225
  6.6 Higher Order Extensions 230
  6.7 Optimal Basis Identification 235
  6.8 Interior Point Software 240
  6.9 Is All the Work Already Done? 243
  6.10 Conclusions 244
  REFERENCES 245

Part II  CONVEX PROGRAMMING 253

7 INTERIOR-POINT METHODS FOR CLASSES OF CONVEX PROGRAMS
  Florian Jarre 255
  7.1 The Problem and a Simple Method 256
  7.2 Self-Concordance 258
  7.3 A Basic Algorithm 281
  7.4 Some Applications 291
  REFERENCES 293

8 COMPLEMENTARITY PROBLEMS
  Akiko Yoshise 297
  8.1 Introduction 297
  8.2 Monotone Linear Complementarity Problems 300
  8.3 Newton's Method and the Path of Centers 308
  8.4 Two Prototype Algorithms for the Monotone LCP 316
  8.5 Computational Complexity of the Algorithms 332
  8.6 Further Developments and Extensions 339
  8.7 Proofs of Lemmas and Theorems 345
  REFERENCES 359

9 SEMIDEFINITE PROGRAMMING
  Motakuri V. Ramana, Panos M. Pardalos 369
  9.1 Introduction 369
  9.2 Geometry and Duality 370
  9.3 Algorithms and Complexity 377
  9.4 Applications 383
  9.5 Concluding Remarks 390
  REFERENCES 391

10 IMPLEMENTING BARRIER METHODS FOR NONLINEAR PROGRAMMING
  David F. Shanno, Mark G. Breitfeld, Evangelia M. Simantiraki 399
  10.1 Introduction 399
  10.2 Modified Penalty-Barrier Methods 402
  10.3 A Slack Variable Alternative 407
  10.4 Discussion and Preliminary Numerical Results 411
  REFERENCES 413

Part III  APPLICATIONS, EXTENSIONS 415

11 INTERIOR POINT METHODS FOR COMBINATORIAL OPTIMIZATION
  John E. Mitchell 417
  11.1 Introduction 417
  11.2 Interior Point Branch and Cut Algorithms 419
  11.3 A Potential Function Method 441
  11.4 Solving Network Flow Problems 445
  11.5 The Multicommodity Network Flow Problem 451
  11.6 Computational Complexity Results 455
  11.7 Conclusions 457
  REFERENCES 459

12 INTERIOR POINT METHODS FOR GLOBAL OPTIMIZATION
  Panos M. Pardalos, Mauricio G. C. Resende 467
  12.1 Introduction 467
  12.2 Quadratic Programming 468
  12.3 Nonconvex Potential Function Minimization 474
  12.4 Affine Scaling Algorithm for General Quadratic Programming 486
  12.5 A Lower Bounding Technique 490
  12.6 Nonconvex Complementarity Problems 493
  12.7 Concluding Remarks 497
  REFERENCES 497

13 INTERIOR POINT APPROACHES FOR THE VLSI PLACEMENT PROBLEM
  Anthony Vannelli, Andrew Kennings, Paulina Chin 501
  13.1 Introduction 501
  13.2 A Linear Program Formulation of the Placement Problem 503
  13.3 A Quadratic Program Formulation of the MNP Placement Model 509
  13.4 Towards Overlap Removal 512
  13.5 Primal-Dual Quadratic Interior Point Methods 514
  13.6 Numerical Results 518
  13.7 Conclusions 524
  REFERENCES 526


CONTRIBUTORS

Erling D. Andersen Department of Management Odense University Campusvej 55 DK-5230 Odense M, Denmark e-mail: [email protected]

Kurt M. Anstreicher School of Business Administration The University of Iowa Iowa City, Iowa 52242, USA e-mail: [email protected]

Mark G. Breitfeld A.T. Kearney GmbH Stuttgart, Germany

Paulina Chin Department of Electrical and Computer Engineering University of Waterloo Waterloo, Ontario, CANADA N2L 3G1 e-mail: [email protected]

Jacek Gondzio Logilab, HEC Geneva Section of Management Studies University of Geneva 102 Bd Carl Vogt CH-1211 Geneva 4, Switzerland e-mail: [email protected] (on leave from the Systems Research Institute Polish Academy of Sciences Newelska 6, 01-447 Warsaw, Poland)

Benjamin Jansen Faculty of Technical Mathematics and Computer Science Delft University of Technology Mekelweg 4, 2628 CD, Delft The Netherlands e-mail: [email protected]

Florian Jarre Institut für Angewandte Mathematik und Statistik Universität Würzburg 97074 Würzburg, Germany e-mail: [email protected]

Andrew Kennings Department of Electrical and Computer Engineering University of Waterloo Waterloo, Ontario, CANADA N2L 3G1 e-mail: [email protected]

Csaba Meszaros Department of Operations Research and Decision Support Systems Computer and Automation Institute Hungarian Academy of Sciences Lagymanyosi u. 11 Budapest, Hungary [email protected]

John E. Mitchell Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY 12180, USA e-mail: [email protected]


Shinji Mizuno Department of Prediction and Control The Institute of Statistical Mathematics Minato-ku, Tokyo 106, Japan e-mail: [email protected]

Panos M. Pardalos Department of Industrial and Systems Engineering 303 Weil Hall University of Florida Gainesville, FL 32611-9083, USA e-mail: [email protected]

Motakuri V. Ramana Department of Industrial and Systems Engineering 303 Weil Hall University of Florida Gainesville, FL 32611-9083, USA e-mail: [email protected]

Mauricio G.C. Resende AT&T Bell Laboratories Murray Hill, New Jersey 07974, USA e-mail: [email protected]

Cornelis Roos Faculty of Technical Mathematics and Computer Science Delft University of Technology Mekelweg 4, 2628 CD, Delft The Netherlands e-mail: [email protected]

David F. Shanno RUTCOR, Rutgers University New Brunswick, New Jersey, USA e-mail: [email protected]


Evangelia M. Simantiraki RUTCOR and Graduate School of Management Rutgers University New Brunswick, New Jersey, USA e-mail: [email protected]

Tamas Terlaky Faculty of Technical Mathematics and Computer Science Delft University of Technology Mekelweg 4, 2628 CD, Delft The Netherlands e-mail: [email protected]

Takashi Tsuchiya The Institute of Statistical Mathematics Department of Prediction and Control 4-6-7 Minami-Azabu, Minato-ku, Tokyo 106, Japan e-mail: [email protected]

Anthony Vannelli Department of Electrical and Computer Engineering University of Waterloo Waterloo, Ontario, CANADA N2L 3G1 e-mail: [email protected]

Xiaojie Xu X_Soft P.O. Box 7207 University, MS 38677-7207, USA (on leave from the Institute of Systems Science Chinese Academy of Sciences Beijing 100080, China) e-mail:[email protected]

Akiko Yoshise Institute of Socio-Economic Planning University of Tsukuba Tsukuba, Ibaraki 305, Japan e-mail: [email protected]


PREFACE

One has to make everything as simple as possible, but never more simple.
Albert Einstein

Discovery consists of seeing what everybody has seen and thinking what nobody has thought.
Albert Szent-Györgyi

The primary goal of this book is to provide an introduction to the theory of Interior Point Methods (IPMs) in Mathematical Programming. At the same time, we try to present a quick overview of the impact of extensions of IPMs on smooth nonlinear optimization and to demonstrate the potential of IPMs for solving difficult practical problems.

The Simplex Method has dominated the theory and practice of mathematical programming since 1947, when Dantzig discovered it. In the fifties and sixties several attempts were made to develop alternative solution methods. At that time the principal basis of interior point methods was also developed, for example in the work of Frisch (1955), Carroll (1961), Huard (1967), Fiacco and McCormick (1968) and Dikin (1967). In 1972 Klee and Minty made explicit that in the worst case some variants of the simplex method may require an exponential amount of work to solve Linear Programming (LP) problems. This was at the time when complexity theory became a topic of great interest. People started to classify mathematical programming problems as efficiently (in polynomial time) solvable and as difficult (NP-hard) problems. For a while it remained open whether LP was solvable in polynomial time or not. The breakthrough resolution of this problem was obtained by Khachijan (1979). His analysis, based on the ellipsoid method, proved that LP and some special convex programming problems are polynomially solvable. However, it soon became clear that in spite of its theoretical efficiency, the ellipsoid method was no serious competitor of the simplex method in practice.

The publication of Karmarkar's paper (1984) initiated a new research area that is now referred to as Interior Point Methods (IPMs). IPMs for LP not only have better polynomial complexity than the ellipsoid method, but are also very efficient


in practice. Since the publication of Karmarkar's epoch-making paper, more than 3000 papers have been published related to interior point methods. It is impossible to summarize briefly the tremendous amount of intellectual effort that was invested in working out all the details necessary for a comprehensive theory, and successful implementation, of IPMs. This volume's primary intent is to give an introduction to and an overview of the field of IPMs for non-experts. We also hope that the surveys collected here contain useful additional information and provide new points of view for experts.

This book is divided into three parts. Part I summarizes the basic techniques, concepts and algorithmic variants of IPMs for linear programming. Part II is devoted to specially structured and smooth convex programming problems, while Part III illustrates some application areas. The authors of the different chapters are all experts in their specific areas. The content of the thirteen chapters is briefly described below.

Part I: Linear Programming contains six chapters. Chapter 1, Introduction to the Theory of Interior Point Methods, introduces the basic notion of the central path, studies its elementary properties, and gives a stand-alone treatment of the duality theory of LP using concepts and tools of IPMs. This part establishes that IPMs can be presented as a self-supporting theory, independent of the classical approach based on the simplex method. The skew-symmetric self-dual embedding introduced here is not only a tool to prove duality theory, but also provides a perfect solution to the initialization problem faced by all IPMs. In addition, this chapter shows how sensitivity and postoptimal parametric analysis can be done correctly, and how this analysis might profit from the extra information provided by interior solutions. The authors, B. Jansen, C. Roos and T. Terlaky, are members of the optimization group of the Delft University of Technology, The Netherlands. In recent years this group made significant contributions to the field of IPMs. B. Jansen defended his Ph.D. Thesis in January 1996 on IPMs; C. Roos was one of the first in Europe who recognized the significance of IPMs and, together with J.-Ph. Vial, developed path following barrier methods; T. Terlaky is known in the optimization community not only as an active member of the IPM community but also as the author of the criss-cross method for linear and oriented matroid programming.
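For the reader's orientation, the display below recalls one standard way of writing an LP primal-dual pair and its central path; the notation is chosen here for illustration and need not coincide with that of Chapter 1.

\[
\begin{aligned}
&\text{(P)}\quad \min\{\, c^T x \;:\; Ax = b,\ x \ge 0 \,\},\qquad
 \text{(D)}\quad \max\{\, b^T y \;:\; A^T y + s = c,\ s \ge 0 \,\},\\
&\text{central path } (\mu > 0):\quad Ax = b,\quad A^T y + s = c,\quad x_i s_i = \mu\ \ (i=1,\dots,n),\quad x > 0,\ s > 0 .
\end{aligned}
\]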

Chapter 2, Affine Scaling Algorithms, gives a survey of the results concerning affine scaling algorithms, introduced and studied first by I.I. Dikin in 1967. Conceptually these algorithms are the simplest IPMs, being based on repeatedly optimizing a linear function on a so-called Dikin ellipsoid inside the feasible region. The affine scaling algorithms were rediscovered after 1984, and the first implementations of IPMs were based on these methods. Unfortunately no polynomial complexity result


is available for affine scaling methods, and it is generally conjectured that such a result is impossible. Even to prove global convergence without any non-degeneracy assumption is quite difficult. This chapter surveys the state-of-the-art results in the area. The author, T. Tsuchiya (The Institute of Statistical Mathematics, Tokyo, Japan), is well known as the leading expert on affine scaling methods. He has contributed to virtually all of the important results which led to global convergence proofs without non-degeneracy assumptions.
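As a rough sketch of the mechanism just described (generic notation, not necessarily that of Chapter 2), one short-step primal affine scaling iteration for min{c^T x : Ax = b, x >= 0} minimizes the objective over the Dikin ellipsoid centered at the current interior iterate:

\[
X_k = \operatorname{diag}(x^k),\qquad
r_k = c - A^T\bigl(A X_k^2 A^T\bigr)^{-1} A X_k^2\, c,\qquad
x^{k+1} = x^k - \alpha\,\frac{X_k^2\, r_k}{\|X_k r_k\|},
\]

with a damping factor 0 < \alpha < 1, which keeps the new iterate strictly positive. The "two-thirds" result of Section 2.10 concerns the long-step variant, where the step is measured as a fraction of the distance to the boundary of the feasible region.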

Chapter 3, Target-Following Methods for Linear Programming, presents a unifying view of primal, dual and primal-dual methods. Almost all IPMs follow a path (the central path, or a weighted path) or some sequence of reference points that leads to optimality, or to a specific central point of the feasible region. The sequence of reference points is called the "target sequence." Newton steps (possibly damped) are made to get close to the current target. Closeness is measured by an appropriate proximity measure. This framework facilitates a unified analysis of most IPMs, including efficient centering techniques. For information about the authors, B. Jansen, C. Roos and T. Terlaky, see the description of Chapter 1.
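To make the notions of a target and a proximity measure concrete, one common choice (the exact definitions used in Chapter 3 may differ in detail) is the following: for a strictly feasible primal-dual pair (x, s) and a positive target vector t, set

\[
\delta(x, s;\, t) \;=\; \frac{1}{2}\,\Bigl\|\, \sqrt{\tfrac{xs}{t}} \;-\; \sqrt{\tfrac{t}{xs}} \,\Bigr\| ,
\qquad (xs)_i = x_i s_i ,
\]

with the square roots and quotients taken componentwise. Damped Newton steps for the system Ax = b, A^T y + s = c, xs = t are applied until \delta is small, after which the target t is moved; the choice t = \mu e with \mu decreasing to zero recovers ordinary central path following.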

Chapter 4, Potential Reduction Algorithms, is included due to the primary historical importance of potential reduction methods: Karmarkar's seminal paper presented a polynomial, projective potential reduction method for LP. After giving an elegant treatment of Karmarkar's projective algorithm, this chapter discusses some versions of the affine potential reduction method and the primal-dual potential reduction method. Several extensions and enhancements of potential reduction algorithms are also briefly described. This survey is given by K.M. Anstreicher from The University of Iowa. In the past ten years he has worked primarily on projective and potential reduction methods. He also showed the equivalence of the classical SUMT code and modern polynomial barrier methods. Most recently his research has considered IPMs based on the volumetric barrier.
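As a point of reference (normalizations vary in the literature and in Chapter 4), the two potential functions most often quoted are Karmarkar's projective potential, for his standard form with optimal value zero, and the Tanabe-Todd-Ye primal-dual potential:

\[
f(x) \;=\; n \ln\bigl(c^T x\bigr) \;-\; \sum_{j=1}^{n} \ln x_j ,
\qquad\qquad
\Phi_\rho(x, s) \;=\; \rho \ln\bigl(x^T s\bigr) \;-\; \sum_{j=1}^{n} \ln\bigl(x_j s_j\bigr),\quad \rho > n .
\]

Reducing such a function by at least a fixed constant at every iteration yields the polynomial iteration bounds.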

Chapter 5, Infeasible Interior Point Methods, discusses what are, for the time being at least, the most practical IPMs. These algorithms require extending the concept of the central path to infeasible solutions. Infeasible IPMs generate iterates that are infeasible for the equality constraints, but still require that the iterates stay in the interior of the positive orthant. Optimality and feasibility are reached simultaneously. Infeasibility of either the primal or the dual problem is detected by divergence of the iterates. This chapter is written by S. Mizuno (The Institute of Statistical Mathematics, Tokyo, Japan), who has contributed to several different areas of IPMs. He was one of the first who


proposed primal-dual methods, made significant contributions to the theory of IPMs for complementarity problems, and is one of the most active researchers on infeasible IPMs.
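A hedged sketch of one iteration of such a method (generic notation, not necessarily the exact variant analyzed in Chapter 5): at an iterate (x, y, s) with x, s > 0 but possibly Ax \ne b and A^T y + s \ne c, the search direction solves

\[
A\,\Delta x = b - Ax,\qquad
A^T \Delta y + \Delta s = c - A^T y - s,\qquad
S\,\Delta x + X\,\Delta s = \sigma\mu e - XSe,
\]

with X = diag(x), S = diag(s), \mu = x^T s / n and a centering parameter \sigma \in (0, 1); the step length is chosen so that (x, s) stays strictly positive, and the feasibility residuals and the duality gap are driven to zero together.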

Chapter 6, Implementation Issues, discusses all the ingredients that are needed for an efficient, robust implementation of IPMs for LP. After presenting a prototype infeasible IPM, the chapter discusses preprocessing techniques, elements and algorithms of sparse linear algebra, adaptive higher order methods, initialization, and stopping strategies. The effects of centering, cross-over and basis identification techniques are studied. Finally, some open problems are presented. The authors, E.D. Andersen (Denmark), J. Gondzio (Poland and Switzerland), Cs. Meszaros (Hungary) and X. Xu (China and USA), are prominent members of the new generation of people who have developed efficient, state-of-the-art optimization software. Each one has his own high performance IPM code, and each code has its own strong points. Andersen's code has the most advanced basis-identification and cross-over, Gondzio's code is the best in preprocessing, and Meszaros' has the most efficient and flexible implementation of sparse linear algebra. Xu's code is based on the skew-symmetric embedding discussed in Chapter 1, and is therefore the most reliable in detecting unboundedness and infeasibilities.
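The dominant cost per iteration in such implementations is the linear algebra; in most codes the Newton system is reduced to the so-called normal equations, shown here in a generic form (the right-hand side depends on the algorithmic variant):

\[
\bigl(A D^2 A^T\bigr)\,\Delta y \;=\; r,\qquad D^2 = X S^{-1},\quad X=\operatorname{diag}(x),\ S=\operatorname{diag}(s),
\]

a sparse symmetric positive definite system that is usually solved with a Cholesky factorization; the quality of the sparsity ordering largely determines the speed of the whole solver, which is why Chapter 6 devotes a full section to the Newton equations.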

Part II: Convex Programming contains four chapters. Chapter 7, Interior Point Methods for Classes of Convex Programs, presents the generalization of polynomial IPMs to smooth convex programs. The smoothness conditions of self-concordance and self-limitation are motivated and defined. Several examples illustrate the concepts and ease understanding. After presenting a prototype polynomial algorithm, and an implementable variant, several classes of structured convex programs are considered that satisfy the imposed smoothness condition. The chapter is written by F. Jarre, who wrote his Ph.D. and Habilitation theses on IPMs for convex optimization. He was one of the first who proved polynomial convergence of IPMs for quadratically constrained convex programs and programs satisfying a certain Relative Lipschitz condition. Recently he started working on an efficient implementation of IPMs for large scale convex programs, more specifically for problems arising from structural design.
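For orientation, the self-concordance condition of Nesterov and Nemirovskii on a convex barrier function F, three times differentiable on an open convex domain, can be written as follows (Chapter 7 gives the precise definition and the variants it actually needs):

\[
\bigl|\nabla^3 F(x)[h,h,h]\bigr| \;\le\; 2\,\bigl(\nabla^2 F(x)[h,h]\bigr)^{3/2}
\qquad\text{for all } x\in\operatorname{dom}F \text{ and all directions } h ,
\]

with the logarithmic barrier F(x) = -\sum_j \ln x_j for the positive orthant as the prototypical example.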

Chapter 8, Complementarity Problems, gives an extensive survey of polynomiality results of IPMs for linear and non-linear complementarity problems. Primal-dual IPMs generalize relatively easily to linear complementarity problems, at least if the coefficient matrix satisfies some additional condition. Here feasible and infeasible IPMs for linear complementarity problems with appropriate matrices are discussed. The generalization to non-linear complementarity problems is far from trivial. Smoothness conditions similar to those discussed in Chapter 7 are needed. Further extensions to variational inequalities are also mentioned. The author, A. Yoshise (The University of Tsukuba, Japan), worked for years together with a group of Japanese researchers who pioneered primal-dual IPMs for LP and complementarity problems.


For this work A. Yoshise, together with her coauthors (including S. Mizuno, the author of Chapter 5), received the Lanchester Prize in 1993.
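In generic notation (which may differ from Yoshise's), the monotone linear complementarity problem and its path of centers can be written as

\[
s = Mx + q,\quad x \ge 0,\ s \ge 0,\quad x^T s = 0,\qquad M \text{ positive semidefinite},
\]
\[
\text{path of centers:}\quad s = Mx + q,\quad x > 0,\ s > 0,\quad x_i s_i = \mu\ \ (i=1,\dots,n),\ \ \mu > 0,
\]

and the primal-dual optimality conditions of LP are recovered as the special case in which M carries a skew-symmetric block structure.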

Chapter 9, Semidefinite Programming, gives an excellent introduction to this newly identified research field of convex programming. Semidefinite programs contain a linear objective and linear constraints, while a matrix of variables should be positive semidefinite. It is proved that this program admits a self-concordant barrier function and is therefore solvable in polynomial time. Semidefinite programs arise, among other places, in relaxations of combinatorial optimization problems, in control theory, and in solving structural design problems. Basic concepts, algorithms, and applications are discussed. The authors are M.V. Ramana and P.M. Pardalos (University of Florida, Gainesville). Motakuri V. Ramana hails from India and received his Ph.D. from The Johns Hopkins University in 1993. He wrote his doctoral dissertation on Multiquadratic and Semidefinite Programming problems. He developed the first algebraic polynomial-size gap-free dual program for SDP, called the Extended Lagrange Slater Dual (ELSD), and has written several papers concerning geometrical, structural and complexity theoretic aspects of semidefinite programming. His other research interests include global and combinatorial optimization, graph theory and complexity theory. For some information about P. Pardalos' activities, see the information following the description of Chapter 12.
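One standard way to write the primal-dual pair of semidefinite programs (Chapter 9 fixes its own notation) is

\[
\min_{X \succeq 0}\ \langle C, X\rangle \ \ \text{s.t.}\ \ \langle A_i, X\rangle = b_i\ \ (i=1,\dots,m),
\qquad\qquad
\max_{y,\ S \succeq 0}\ b^T y \ \ \text{s.t.}\ \ \sum_{i=1}^{m} y_i A_i + S = C,
\]

with \langle A, B\rangle = \operatorname{trace}(A^T B). The barrier function -\ln\det X is self-concordant on the cone of positive semidefinite matrices, which is the property behind the polynomial-time solvability mentioned above.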

Chapter 10, Implementing Barrier Methods for Nonlinear Programming, proposes two algorithmic schemes for general nonlinear programs. The first is a pure barrier algorithm using modified barriers, while the second uses the classical logarithmic barrier and provides a way to generate variants of sequential quadratic programming methods. Implementation issues and some illustrative computational results are presented as well. The practical efficiency of IPMs for solving nonlinear problems is not yet as well established as in the case of LP, and this paper is an important step in that direction. D.F. Shanno (RUTCOR, Rutgers University) is well known in the nonlinear optimization community for his classical work on Quasi-Newton methods. He was one of the authors of the OB1 code, which was the first really efficient implementation of IPMs for LP. He and his coauthors received the Orchard-Hays prize of the Mathematical Programming Society in 1992 for their pioneering work in implementing IPMs. M.G. Breitfeld (Stuttgart, Germany) was, and E.M. Simantiraki (RUTCOR, Rutgers University) is, Shanno's Ph.D. student. Both are known for their significant contributions in developing and implementing barrier methods for nonlinear programming.
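For concreteness, for a nonlinear program min f(x) subject to g_i(x) \ge 0 (i = 1,...,m), the classical logarithmic barrier subproblem referred to above is

\[
\min_x\ B(x;\mu) \;=\; f(x) \;-\; \mu \sum_{i=1}^{m} \ln g_i(x),\qquad \mu \downarrow 0,
\]

solved approximately for a decreasing sequence of barrier parameters; modified (shifted or penalized) barriers replace the term \ln g_i(x) so as to remain well defined on and near the boundary of the feasible region.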

Part III: Applications, Extensions contains three chapters. Chapter 11, Interior Point Methods for Combinatorial Optimization, surveys the applicability of IPMs in solving combinatorial optimization problems. The chapter describes the adaptation of IPMs to branch and cut methods, and also to potential


reduction algorithms specially designed to solve combinatorial problems by transforming them into nonconvex nonlinear problems. IPMs tailored to solve network optimization and multicommodity flow problems, including some IPM based cutting plane methods, are also discussed. J.E. Mitchell (Rensselaer Polytechnic Institute) received his Ph.D. from Cornell University. His work was the first attempt to use IPMs in combinatorial optimization. He has mainly worked on exploring the potential of IPMs in branch and cut algorithms.

Chapter 12, Interior Point Methods for Global Optimization, indicates the potential of IPMs in global optimization. As in the case of combinatorial optimization, most problems in global optimization are NP-hard, so to expect polynomiality results for such problems is not realistic. However, significant improvements in the quality of the obtained (possibly only local) solution and in solution time are frequently achieved. The paper presents potential reduction and affine scaling algorithms and lower bounding techniques for general nonconvex quadratic problems, including some classes of combinatorial optimization problems. It is easy to see that any nonlinear problem with polynomial constraints can be transformed into such quadratic problems. The authors, P.M. Pardalos (University of Florida, Gainesville) and M.G.C. Resende (AT&T Research), are recognized experts in optimization. Pardalos is known as a leading expert in the field of global optimization and has written and/or edited over ten books in recent years. Resende is responsible for pioneering work in implementing IPMs for LP, network programming, combinatorial and global optimization problems.
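A small illustration of the reduction mentioned above (our own toy example, not taken from Chapter 12): a cubic term is eliminated by introducing one extra variable and one quadratic equality, for instance

\[
x^3 + x \le 1
\qquad\Longleftrightarrow\qquad
y = x^2,\qquad x\,y + x \le 1,
\]

and repeating this substitution turns any polynomial constraint into quadratic (possibly bilinear) constraints at the price of additional variables.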

Chapter 13, Interior Point Approaches for the VLSI Placement Problem, introduces the reader to an extremely important application area of optimization. Several optimization problems arise in VLSI (Very Large Scale Integration) chip design. Here two new placement models are discussed that lead to sparse LP and sparse convex quadratic programming problems, respectively. The resulting problems are solved by IPMs. Computational results for some real placement problems are presented. A. Vannelli and his Ph.D. students A. Kennings and P. Chin work in the Department of Electrical and Computer Engineering of the University of Waterloo, Waterloo, Canada. Vannelli is known for his devoted pioneering work on applying exact optimization methods in VLSI design.


Acknowledgements

I would like to thank my close colleagues D. den Hertog, B. Jansen, E. de Klerk, T. Luo, H. van Maaren, J. Mayer, A.J. Quist, C. Roos, J. Sturm, J.-Ph. Vial, J.P. Warners and S. Zhang for their help and continuous support. These individuals have provided countless useful discussions in the past years, have helped me to review the chapters of this book, and have helped me with useful comments of all sorts. I am also grateful to all the authors of this book for their cooperation and for their excellent work, to John Martindale and his assistants (Kluwer Academic Publishers) for their kind practical help, and to P. Pardalos, the managing editor of the series "Applied Optimization", for his deep interest in modern optimization methods and his constant encouragement. Professor Emil Klafszky (University of Technology, Budapest, Hungary), my Ph.D. supervisor, had a profound personal influence on my interest, taste and insight in linear and nonlinear programming. Without this intellectual impulse I would probably never have become an active member of the mathematical programming community. Finally, but most of all, I thank my wife for all her love, patience, and support. Without her continuous support this book would never have been completed.

Tamas Terlaky
May 1996, Delft, The Netherlands