Linear Programming Frequently Asked Questions
Optimization Technology Center of Northwestern University and Argonne National Laboratory

Posted at http://www-unix.mcs.anl.gov/otc/Guide/faq/linear-programming-faq.html
Changes posted to Usenet newsgroup sci.op-research
Date of this version: September 1, 2003

Q1. "What is Linear Programming?" Q2. "Where is there good software to solve LP problems?"

o "Free" codes o Commercial codes and modeling systems o Free demos of commercial codes

Q3. "Oh, and we also want to solve it as an integer program." Q4. "I wrote an optimization code. Where are some test models?" Q5. "What is MPS format?" Q6. Topics briefly covered:

o Q6.1 : "What is a modeling language?" o Q6.2 : "How do I diagnose an infeasible LP model?" o Q6.3 : "I want to know the specific constraints that contradict each other." o Q6.4 : "I just want to know whether or not a feasible solution exists." o Q6.5 : "I have an LP, except it's got several objective functions." o Q6.6 : "I have an LP that has large almost-independent matrix blocks that are

linked by a few constraints. Can I take advantage of this?" o Q6.7 : "I am looking for an algorithm to compute the convex hull of a finite

number of points in n-dimensional space." o Q6.8 : "Are there any parallel LP codes?" o Q6.9 : "What software is there for Network models?" o Q6.10 : "What software is there for the Traveling Salesman Problem (TSP)?" o Q6.11 : "What software is there for cutting or packing problems?" o Q6.12 : "What software is there for linear programming under uncertainty?" o Q6.13 : "I need to do post-optimal analysis." o Q6.14 : "Do LP codes require a starting vertex?" o Q6.15 : "How can I combat cycling in the Simplex algorithm?"

Q7. "What references are there in this field?" o General reference o Textbooks o Presentations of LP modeling systems o Books containing source code o Additional books o Periodicals o Articles of interest

Q8. "What's available online in this field?" o Online sources of optimization services o Online sources of optimization software o Online sources of optimization information

Q9. "Who maintains this FAQ list?"

See also the following pages pertaining to mathematical programming and optimization modeling:

  o The related Nonlinear Programming FAQ.
  o Harvey Greenberg's Mathematical Programming Glossary.
  o The Optimization Online repository of reports and papers.
  o The e-Optimization.community website.
  o The NEOS Guide to optimization models and software.
  o The Decision Tree for Optimization Software by H.D. Mittelmann and P. Spellucci.
  o Software for Optimization: A Buyer's Guide by Robert Fourer.
  o The OR/MS Today 2001 Linear Programming Software Survey.
  o The NEOS Server for access to optimization solvers through the internet.
  o Hans Mittelmann's Benchmarks for Optimization Software.
  o The Computational Optimization and Applications Software Forum.

. . . and don't forget the Web search engines! Services such as google, go/infoseek, lycos, excite, and altavista can be surprisingly helpful in finding pages on particular problem types or application areas.

Q1. "What is Linear Programming?"

A: (For rigorous definitions and theory, which are beyond the scope of this document, the interested reader is referred to the many LP textbooks in print, a few of which are listed in the references section.)

A Linear Program (LP) is a problem that can be expressed as follows (the so-called Standard Form):

    minimize    cx
    subject to  Ax  = b
                 x >= 0

where x is the vector of variables to be solved for, A is a matrix of known coefficients, and c and b are vectors of known coefficients. The expression "cx" is called the objective function, and the equations "Ax=b" are called the constraints. All these entities must have consistent dimensions, of course, and you can add "transpose" symbols to taste. The matrix A is generally not square, hence you don't solve an LP by just inverting A. Usually A has more columns than rows, and Ax=b is therefore quite likely to be under-determined, leaving great latitude in the choice of x with which to minimize cx.

The word "Programming" is used here in the sense of "planning"; the necessary relationship to computer programming was incidental to the choice of name. Hence the phrase "LP program" to refer to a piece of
software is not a redundancy, although I tend to use the term "code" instead of "program" to avoid the possible ambiguity. Although all linear programs can be put into the Standard Form, in practice it may not be necessary to do so. For example, although the Standard Form requires all variables to be non-negative, most good LP software allows general bounds l <= x <= u, where l and u are vectors of known lower and upper bounds. Individual elements of these bounds vectors can even be infinity and/or minus-infinity. This allows a variable to be without an explicit upper or lower bound, although of course the constraints in the A-matrix will need to put implied limits on the variable or else the problem may have no finite solution. Similarly, good software allows b1 <= Ax <= b2 for arbitrary b1, b2; the user need not hide inequality constraints by the inclusion of explicit "slack" variables, nor write Ax >= b1 and Ax <= b2 as two separate constraints. Also, LP software can handle maximization problems just as easily as minimization (in effect, the vector c is just multiplied by -1). The importance of linear programming derives in part from its many applications (see further below) and in part from the existence of good general-purpose techniques for finding optimal solutions. These techniques take as input only an LP in the above Standard Form, and determine a solution without reference to any information concerning the LP's origins or special structure. They are fast and reliable over a substantial range of problem sizes and applications. Two families of solution techniques are in wide use today. Both visit a progressively improving series of trial solutions, until a solution is reached that satisfies the conditions for an optimum. Simplex methods, introduced by Dantzig about 50 years ago, visit "basic" solutions computed by fixing enough of the variables at their bounds to reduce the constraints Ax = b to a square system, which can be solved for unique values of the remaining variables. Basic solutions represent extreme boundary points of the feasible region defined by Ax = b, x >= 0, and the simplex method can be viewed as moving from one such point to another along the edges of the boundary. Barrier or interior-point methods, by contrast, visit points within the interior of the feasible region. These methods derive from techniques for nonlinear programming that were developed and popularized in the 1960s by Fiacco and McCormick, but their application to linear programming dates back only to Karmarkar's innovative analysis in 1984. The related problem of integer programming (or integer linear programming, strictly speaking) requires some or all of the variables to take integer (whole number) values. Integer programs (IPs) often have the advantage of being more realistic than LPs, but the disadvantage of being much harder to solve. The most widely used general-purpose techniques for solving IPs use the solutions to a series of LPs to manage the search for integer solutions and to prove optimality. Thus most IP
software is built upon LP software, and this FAQ applies to problems of both kinds. Linear and integer programming have proved valuable for modeling many and diverse types of problems in planning, routing, scheduling, assignment, and design. Industries that make use of LP and its extensions include transportation, energy, telecommunications, and manufacturing of many kinds. A sampling of applications can be found in many LP textbooks, in books on LP modeling systems, and among the application cases in the journal Interfaces.
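
To make the Standard Form above concrete, here is a minimal sketch (not part of this FAQ's software listings) that states a tiny LP as the data c, A and b and solves it with the linprog routine from the freely available SciPy package; the numbers are made up purely for illustration. The same ingredients are what any of the solvers discussed below consume, whether supplied as an MPS file or through a callable library.

    # A minimal sketch (illustrative data): a tiny LP in the Standard Form
    #   minimize cx  subject to  Ax = b, x >= 0,
    # solved with SciPy's linprog routine.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, 2.0, 0.0])        # objective coefficients (the vector c)
    A = np.array([[1.0, 1.0, 1.0],       # constraint matrix A: 2 rows, 3 columns,
                  [2.0, 0.5, 0.0]])      # so Ax = b is under-determined as usual
    b = np.array([10.0, 8.0])            # right-hand side (the vector b)

    # bounds=(0, None) imposes x >= 0; general bounds l <= x <= u would go here.
    result = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
    print(result.x, result.fun)          # an optimal x and its objective value cx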

Q2. "Where is there good software to solve LP problems?"

A: Thanks to the advances in computing of the past decade, linear programs in a few thousand variables and constraints are nowadays viewed as "small". Problems having tens or hundreds of thousands of continuous variables are regularly solved; tractable integer programs are necessarily smaller, but are still commonly in the hundreds or thousands of variables and constraints. The computers of choice for linear and integer programming applications are Pentium-based PCs and the several varieties of Unix workstations. There is more to linear programming than optimal solutions and number-crunching, however. This can be appreciated by observing that modern LP software comes in two related but very different kinds of packages:

Algorithmic codes are devoted to finding optimal solutions to specific linear programs. A code takes as input a compact listing of the LP constraint coefficients (the A, b, c and related values in the standard form) and produces as output a similarly compact listing of optimal solution values and related information.

Modeling systems are designed to help people formulate LPs and analyze their solutions. An LP modeling system takes as input a description of a linear program in a form that people find reasonably natural and convenient, and allows the solution output to be viewed in similar terms; conversion to the forms required by algorithmic codes is done automatically. The collection of statement forms for the input is often called a modeling language.

Most modeling systems support a variety of algorithmic codes, while the more popular codes can be used with many different modeling systems. Because packages of the two kinds are often bundled for convenience of marketing or operation, the distinction between them is sometimes obscured, but it is important to keep in mind when attempting to sort through the many alternatives available. Large-scale LP algorithmic codes rely on general-structure sparse matrix techniques and numerous other refinements developed through years of experience. The fastest and most reliable codes thus represent
considerable development effort, and tend to be expensive except in very limited demonstration or "student" versions. Those codes that are free -- to all, or at least for research and teaching -- tend to be somewhat less robust, though they are still useful for many problems. The ability of a code to solve any particular class of problems cannot easily be predicted from problem size alone; some experimentation is usually necessary to establish difficulty.

Large-scale LP modeling systems are commercial products virtually without exception, and tend to be as expensive as the commercial algorithmic codes (again with the exception of small demo versions). They vary so greatly in design and capability that a description in words is adequate only to make a preliminary decision among them; your ultimate choice is best guided by using each candidate to formulate a model of interest.

Listed below are summary descriptions of available free codes, and a tabulation of many commercial codes and modeling systems for linear (and integer) programming. A list of free demos of commercial software appears at the end of this section.

Another useful source of information is the Optimization Software Guide by Jorge More' and Stephen Wright. It contains references to about 75 available software packages (not all of them just LP), and goes into more detail than is possible in this FAQ; see in particular the sections on "linear programming" and on "modeling languages and optimization systems." An updated Web version of this book is available on the NEOS Guide. Another good source of feature summaries and contact information is the Linear Programming Software Survey compiled by OR/MS Today (which also has the largest selection of advertisements for optimization software). Much information can also be obtained through the websites of optimization software developers, many of which are identified in the writeup and tables below.

To provide some idea of the relative performance of LP codes, a collection of benchmark results for optimization software is being compiled by Hans Mittelmann of Arizona State University. It includes tests of numerous simplex and interior-point codes for linear programming as well as branch-and-bound codes for linear integer programming; both commercial packages and downloadable free implementations are included. (Many nonlinear programming benchmarks are also available at this site.)

When evaluating any performance comparison, whether performed by a customer, vendor, or disinterested third party, keep in mind that all high-quality codes provide options that offer superior performance on certain difficult kinds of LP or IP problems. Benchmark studies of the "default settings" of codes will fail to reflect the power of any optional settings that are applicable.

"Free" codes

Some of these programs require registration or payment for some (especially commercial) uses. Conditions of use are also subject to change. It is a good practice to check a code's "readme" file or introductory documentation for restrictions before committing to use it. These codes are not as fast or robust on average as the commercial products, but they're a reasonable first try if you're not sure what level of power you need.

Based on the simplex method:

lp_solve is a downloadable code for linear and mixed-integer programming, currently maintained by Peter Notebaert ([email protected]). Version 4.0 is now available, under the GNU Lesser General Public License; there is also a program, mps2eq_0.2.tar.gz, that converts data files from MPS format to lp_solve's own input format. Source kits and compiled versions can be obtained from a download directory.

GLPK (GNU Linear Programming Kit) is a C package that includes simplex (and also primal-dual interior point) methods for linear programming, a branch-and-bound implementation for integer programming, and a translator for the GNU MathProg language (a subset of AMPL). It is made available as open source under the GNU General Public License; Andrew Makhorin ([email protected]) is the developer and maintainer.

LP-Optimizer is a simplex-based code for linear and integer programs, written by Markus Weidenauer ([email protected]). Free Borland Pascal 7.0 and Borland Delphi 4 source is available for downloading, as are executables for DOS and for Windows (95 or later).

SoPlex is an object-oriented implementation of the primal and dual simplex algorithms, developed by Roland Wunderling. Source code is available free for research uses at noncommercial and academic institutions.

Among the SLATEC library routines is a Fortran sparse implementation of the simplex method, SPLP. Its documentation states that it can solve LP models of "at most a few thousand constraints and variables".

Based on interior-point methods:

The Optimization Technology Center at Argonne National Laboratory and Northwestern University has developed PCx, an interior-point code that is freely available for downloading. PCx is available in Fortran or C source or a variety of Windows and Unix executables, with an optional Java-based GUI interface. Input can be specified in MPS form or by use of the AMPL modeling language.

Csaba Meszaros ([email protected]) has written BPMPD, an interior-point code for linear and convex quadratic programs. A demonstration version, which solves problems of any size but does not report optimal values of the variables for problems larger than about 500 x 500, is available as a Windows95/NT executable or DLL. Separately, a large variety of Unix binaries for Linux and four workstation platforms are available for downloading.

Jacek Gondzio ([email protected]) has developed the interior-point LP (and convex QP) solver HOPDM. Several papers (also available at the HOPDM website) detail the features of this solver, which include automatic selection of multiple centrality correctors and factorization method, and a "warm start" feature for taking advantage of known solutions. A public-domain Fortran version (2.13, LP only) can be downloaded, and a newer C version (2.30) is available on request to the developer.

If you want to solve an LP without downloading a code to your own machine, you can execute many of these interior-point codes (as well as varied commercial LP codes) through the NEOS Server.

Modeling systems:

LPL is a mathematical modeling language for formulating and maintaining linear and mixed-integer programs. It is particularly notable for its ability to also handle a variety of logical constraints, which are translated symbolically into algebraic constraints using 0-1 variables. You can download the software and documentation free of charge.

Other software of interest:

COIN, a Common Optimization INterface for Operations Research, is an "open source" repository established with support from IBM Corporation. Source code initially available includes a parallel branch-cut-price framework, a cut generation library, an implementation of the Volume Algorithm for fast approximate solutions to combinatorial problems, and an open solver interface layer.

ABACUS is a C++ class library that "provides a framework for the implementation of branch-and-bound algorithms using linear programming relaxations that can be complemented with the dynamic generation of cutting planes or columns" (branch-and-cut and/or branch-and-price). It relies on CPLEX, SoPlex, or Xpress-MP to solve linear programs. Further information is available from Stefan Thienel, [email protected].

Various small-scale implementations are mainly intended for instructional purposes.

Robert Vanderbei of Princeton has developed Java-based tools for facilitating simplex pivots and network simplex pivots, as well as a variety of Java applets that test students on their knowledge of various simplex-based methods.

A web-based Linear Program Solver with Simplex, part of the RIOT project at Berkeley, appears to be useful for solving small models that can be entered by hand.

The NEOS Guide developed by Northwestern and Argonne National Laboratory offers a Java-based Simplex Tool that demonstrates the workings of the simplex method on small user-entered problems.

The Operations Research Laboratory at Seoul National University, Korea offers C source for large-scale Linear Programming software (both Simplex and Barrier) and for numerous more specialized optimization problems. Will Naylor has a collection of software he calls WNLIB. Routines of interest include a dense-matrix simplex method for linear programming
(with anti-cycling and numerical stability "hacks") and a sparse-matrix transportation/assignment problem routine. (WNLIB also contains routines pertaining to nonlinear optimization.) A code known as lp is Mike Hohmeyer's C implementation of Raimund Seidel's O(d! n) time linear programming algorithm. It's reputed to be extremely fast in low dimensions (say, d < 10), so that it's appropriate for a variety of geometric problems, especially with very large numbers of constraints. The next several suggestions are for codes that are severely limited by the dense vector and matrix algebra employed in their implementations; they may be OK for models with (on the order of) 100 variables and constraints, but it's unlikely they will be satisfactory for larger models. In the words of Matt Saltzman ([email protected]):

The main problems with these codes have to do with scaling, use of explicit inverses and lack of reinversion, and handling of degeneracy. Even small problems that are ill-conditioned or degenerate can bring most of these tableau codes to their knees. Other disadvantages for larger problems relate to sparsity, pricing, and maintaining the complete nonbasic portion of the tableau. But for small, dense problems these difficulties may not be serious enough to prevent such codes from being useful, or even preferable to more "sophisticated" sparse codes. In any event, use them with care.

For DOS/PC users, there is a "friendly Linear Programming and Linear Goal Programming" code called LINSOLVE, developed by Prof. Timo Salmi ([email protected]).

Also on the garbo server is LP 2.61, a shareware linear and integer programming code by Cornel Huth, distributed as PC binaries. It accepts text and spreadsheet files as input.

There is an ACM TOMS routine for LP, #552. This routine was designed for fitting data to linear constraints using an L1 norm, but it uses a modification of the Simplex Method and could presumably be modified to satisfy LP purposes.

There are books that contain source code for the Simplex Method. See the section on references. You should not expect such code to be robust. In particular, you can check whether it uses a 2-dimensional array for the A-matrix; if so, it is surely using the tableau Simplex Method rather than sparse methods, and Saltzman's comments will apply.

OR-Objects is a Java class library that includes objects for dense linear programming (and other methods common in operations research).

Pysimplex contains an implementation of the simplex method in the object-oriented language Python, together with modules that allow construction of models in a straightforward symbolic manner.

For Macintosh users there is relatively little available, but here are a few possibilities:

Mac OS X binaries for the student version of AMPL and three solvers are available from netlib.

LinPro is a freely available but strictly small-scale linear programming package. It has its own input format and does not support MPS format.

The 2nd edition of Wayne L. Winston's Introduction to Mathematical Programming: Applications and Algorithms is listed by Duxbury Press as being available with the Macintosh LINDO LP solver.

Stephen F. Gale ([email protected]) writes: Available at my website is a fairly simple Simplex Solver written for Turbo Pascal 3.0. The original algorithm is from the book "Some Common BASIC Programs" by Lon Poole and Mary Borchers (ISBN 0-931988-06-3). However, I revised it considerably when I converted it to Pascal. I then added Sensitivity Analysis based on the book "The Operations Research Problem Solver" (ISBN 0-87891-548-6). I have tested the program on over 30 textbook problems, but never used it for real life applications. If someone finds a problem with the program, I would be pleased to correct it. I would also appreciate knowing how the program was used.

The following suggestions may represent low-cost ways of solving LPs if you already have certain software available to you.

All of the most popular spreadsheet programs come with a limited linear/integer programming solver as a feature or add-in. A much broader range of more powerful solvers is available for separate purchase from LINDO Systems and Frontline Systems. Some of these solvers give special attention to common spreadsheet functions such as MIN, IF, and LOOKUP that can be handled by linear or integer programming approaches.

QSB for Windows by Yih-Long Chang, published by Wiley, has an LP module among its routines.

If you have access to a commercial math library, such as SAS, IMSL or NAG, you may be able to use an LP routine from there.

If you have MATLAB, you can run a number of useful optimization packages that provide some linear programming features:

  o The MATLAB Optimization Toolbox.
  o The TOMLAB Optimization Environment provides MATLAB connections to MINOS for large-scale linear programming and Xpress-MP and CPLEX for large-scale linear and integer programming, as well as to these and other codes for a variety of nonlinear programming problems. (See listing under modeling systems below.)
  o milp.m, a routine that uses the Optimization Toolbox to solve mixed-integer linear programs.
  o LPMEX, an interface to let you run lp_solve from MATLAB.
  o MATLAB interfaces to the LOQO, LIPSOL, and MOSEK solvers.

The MAPLE and Mathematica packages for mathematical computation also provide some linear programming routines; you can get more information by searching at their websites.

Commercial codes and modeling systems

If your models prove to be too difficult for free or add-on software to handle, then you may have to consider acquiring a commercial LP code. Dozens of such codes are on the market. There are many considerations in selecting an LP code. Speed is important, but LP is complex enough that different codes go faster on different models; you won't find a "Consumer Reports" article to say with certainty which code is THE fastest. I usually suggest getting benchmark results for your particular type of model if speed is paramount to you. Benchmarking can also help determine whether a given code has sufficient numerical stability for your kind of models.

Other questions you should answer: Can you use a stand-alone code, or do you need a code that can be used as a callable library, or do you require source code? Do you want the flexibility of a code that runs on many platforms and/or operating systems, or do you want code that's tuned to your particular hardware architecture (in which case your hardware vendor may have suggestions)? Is the choice of algorithm (Simplex, Interior-Point) important to you? Do you need an interface to a spreadsheet code? Is the purchase price an overriding concern? If you are at a university, is the software offered at an academic discount? How much hotline support do you think you'll need? There is usually a large difference in LP codes, in performance (speed, numerical stability, adaptability to computer architectures) and in features, as you climb the price scale.

Information on commercial systems is provided in two tables below. The first lists packages that are primarily algorithmic codes, and the second lists modeling systems. For a more extensive summary, take a look at the Linear Programming Software Survey in the August 1999 issue of OR/MS Today. In the tables below, product names are linked to product or developer websites where known. Under "Platform" is an indication of common environments in which the code runs, with the choices being Microsoft Windows (PC), Macintosh OS (M), and Unix/Linux (U). The codes under "Features" are as follows:

  S=Simplex   Q=Quadratic   B=Barrier
  G=General Nonlinear   N=Network   I=Integer/Combinatorial

All product information is subject to change, and some delay may occur before changes are reflected in this table. Consult the products' developers or vendors for definitive information.

Solver codes:

Product          Features  Platform  Phone (+1)                      E-mail address
C-WHIZ           SI        PC U      703-412-3201                    [email protected]
CPLEX            SBINQ     PC U      800-367-4564 / 775-831-7744     [email protected]
FortMP           SBIQ      PC U      +44 18-9525-6484                [email protected]
HI-PLEX          S         PC U      +44 20-7594-8334                [email protected]
HS/LP            SI        PC        201-627-1424                    [email protected]
ILOG Opt Suite   SBINQ     PC U      800-367-4564 / 775-831-7744     [email protected]
LAMPS            SI        PC U      +44 20-8877-3030                [email protected]
LINDO API        SBI       PC U      312-988-7422                    [email protected]
LOQO             IQG       PC U      609-258-0876                    [email protected]
LPS-867          SI        PC U      609-737-6800                    [email protected]
MINOS            SG        PC        650-856-1695                    [email protected]
MINTO            I         U         404-894-6287                    [email protected]
MOSEK            SBQG      PC U      +45 3917-9907                   [email protected]
MPSIII           SIN       PC U      703-412-3201 / +352 5313-2455   [email protected], [email protected]
OSL              SBINQ     PC U      914-435-6685                    [email protected]
SAS/OR           SINQG     PC M U    919-677-8000
SCICONIC         SI        PC U      +44 19-0828-4188                [email protected]
Solv Engine DLL  SIQG      PC        888-831-0333 / 775-831-0300     [email protected]
SOPT             SBIQG     PC U      732-264-4700 / +81 3-5966-1220  [email protected]
XA               SI        PC M U    626-441-1565                    [email protected]
Xpress-MP        SBIQ      PC U      201-567-9445 / +44 1604-858993  [email protected]

Modeling systems:

Product             Platform  Phone (+1)                      E-mail address
AIMMS               PC        +31 23-5511512                  [email protected]
AMPL                PC U      see list                        [email protected]
ANALYZE             PC        303-796-7830                    [email protected]
DecisionPRO         PC        919-859-4101                    [email protected]
DATAFORM            PC U      703-412-3201                    [email protected]
EZMod               PC        866-811-4541 / 418-653-0853     [email protected]
GAMS                PC U      202-342-0180                    [email protected]
LINGO               PC U      800-441-2378                    [email protected]
LP-TOOLKIT          PC U      +33 1-39071240                  [email protected]
MathPro             PC U      301-951-9006                    [email protected]
MIMI                PC U      908-464-8300                    [email protected]
MINOPT              PC U      609-258-3097                    [email protected]
MODLER              PC U      303-796-7830                    [email protected]
MPL                 PC        703-522-7900                    [email protected]
OML                 PC U      703-412-3201                    [email protected]
OMNI                PC U      973-627-1424                    [email protected]
OPL Studio          PC U      800-367-4564 / 775-831-7744     [email protected]
Prem Solv Platform  PC        888-831-0333 / 775-831-0300     [email protected]
SolveLPx            PC        +47 99517555                    [email protected]
TOMLAB              PC U M    +46 21-804760 / 530-629-1110    [email protected], [email protected]
What's Best!        PC U M    800-441-2378                    [email protected]
Xpress-Mosel        PC U      201-567-9445 / +44 1604-858993  [email protected]

Free demos of commercial codes

An increasing number of commercial LP software developers are making demo or academic versions available for downloading through websites or as add-ons to book packages. Typically these versions are limited in the size of problem they accept or the length of time that they will operate, or are made available only for "academic use" (mainly research or teaching at universities). Nevertheless, they have most or all of the features of the full versions. Most run under several variations of Microsoft Windows on PCs, and/or certain Unix workstations; check the relevant web pages for details. Downloadable free demos are available for:

  o AIMMS
  o AMPL with CPLEX and MINOS
  o ANALYZE, MODLER and RANDMOD
  o C-WHIZ and OML
  o LINDO, LINGO and What's Best!
  o LOQO with a built-in AMPL interface
  o MOSEK API and MATLAB toolbox
  o MPL with CPLEX
  o OPL Studio with the ILOG Optimization Suite (including CPLEX)
  o TOMLAB with MINOS, CPLEX, Xpress-MP, and other solvers running from Matlab
  o Xpress-IVE with the Xpress-MP Optimizer and Xpress-Mosel

Books that are packaged with demo software include:

  o A. Brooke, D. Kendrick and A. Meeraus, GAMS: A Users' Guide, Thomson Learning/Duxbury Press, ISBN 0-894-26215-7.
  o R. Fourer, D.M. Gay and B.W. Kernighan, AMPL: A Modeling Language for Mathematical Programming, Thomson Learning/Duxbury Press, ISBN 0-534-38809-4.
  o H.J. Greenberg, Modeling by Object-Driven Linear Elemental Relations: A User's Guide for MODLER, Kluwer Academic Publishers, ISBN 0-792-39323-6.
  o L. Schrage, Optimization Modeling with LINGO, LINDO Systems, ISBN 1-893-35500-4.

Many developers are also willing to arrange for you to "borrow" copies of their full-featured versions for purposes of evaluation. Details vary, however, so you'll have to check with each vendor whose product you're interested in.

Q3. "Oh, and we also want to solve it as an integer program."

A: Integer LP models are ones whose variables are constrained to take integer or whole number (as opposed to fractional) values. It may not be obvious that integer programming is a very much harder problem than ordinary linear programming, but that is nonetheless the case, in both theory and practice.

Integer models are known by a variety of names and abbreviations, according to the generality of the restrictions on their variables. Mixed integer (MILP or MIP) problems require only some of the variables to take integer values, whereas pure integer (ILP or IP) problems require all variables to be integer. Zero-one (or 0-1 or binary) MIPs or IPs restrict their integer variables to the values zero and one. (The latter are more common than you might expect, because many kinds of combinatorial and logical restrictions can be modeled through the use of zero-one variables.) For the sake of generality, the following discussion uses the term MIP to refer to any kind of integer LP problem; the other kinds can be viewed as special cases.

MIP, in turn, is a particular member of the class of combinatorial or discrete optimization problems. In fact the problem of determining whether a MIP has an objective value less than a given target is a member of the class of "NP-complete" problems, all of which are very hard to solve (at least as far as anyone has been able to tell). Since any NP-complete problem is reducible to any other, virtually any combinatorial problem of interest can be attacked in principle by solving some equivalent MIP. This approach sometimes works well in practice, though it is by no means infallible.
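
(To illustrate the zero-one modeling of logical restrictions mentioned above: a condition such as "project B can be chosen only if project A is chosen" becomes the linear constraint xB <= xA on 0-1 variables, and "choose at most three of these five projects" becomes x1 + x2 + x3 + x4 + x5 <= 3.)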

People are sometimes surprised to learn that MIP problems are solved using floating point arithmetic. Most available general-purpose large-scale MIP codes use a procedure called "branch-and-bound" to search for an optimal integer solution by solving a sequence of related LP "relaxations" that allow some fractional values. Good codes for MIP distinguish themselves primarily by solving shorter sequences of LPs, and secondarily by solving the individual LPs faster. (The similarities between successive LPs in the "search tree" can be exploited to speed things up considerably.) Even more so than with regular LP, a costly commercial code may prove its value if your MIP model is difficult.

Another solution approach known generally as constraint logic programming or constraint programming (CP) has drawn increasing interest of late. Having their roots in studies of logical inference in artificial intelligence, CP codes typically do not proceed by solving any LPs. As a result, compared to branch-and-bound they search "harder" but faster through the tree of potential solutions. Their greatest advantage, however, lies in their ability to tailor the search to many constraint forms that can be converted only with difficulty to the form of an integer program; their greatest success tends to be with "highly combinatorial" optimization problems such as scheduling, sequencing, and assignment, where the construction of an equivalent IP would require the definition of large numbers of zero-one variables. Notable constraint programming codes include CHIP, ECLiPSe, GNU Prolog, IF/Prolog, ILOG Solver, Koalog Constraint Solver, Mozart, and SICStus Prolog. Much more information can be found in the Constraints Archive, which contains the comp.constraints newsgroup FAQ, pages of constraint-related pointers, source code for various systems, benchmarks, a directory of people interested in constraints, constraint bibliographies, and a collection of on-line papers.

The IP and CP approaches are not so far apart as they may seem, particularly now that each is being adapted to incorporate some of the strengths of the other. Some fundamental connections are described in [Chandru and Hooker] and [Hooker].

Whatever your solution technique, you should be prepared to devote far more computer time and memory to solving a MIP problem than to solving the corresponding LP relaxation. (Or equivalently, you should be prepared to solve much smaller MIP problems than LP problems using a given amount of computer resources.) To further complicate matters, the difficulty of any particular MIP problem is hard to predict (in advance, at least!). Problems in no more than a hundred variables can be challenging, while others in tens of thousands of variables solve readily. The best explanations of why a particular MIP is difficult often rely on some insight into the system you are modeling, and even then tend to appear only after a lot of computational tests have been run. A related observation is that the way you formulate your model can be as important as the actual choice of solver.
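
As a rough illustration of the branch-and-bound procedure described above, here is a minimal sketch in Python. It uses SciPy's linprog (an arbitrary choice of LP solver, not one this FAQ prescribes) to solve each relaxation and branches on a fractional variable; real MIP codes add presolve, cutting planes, warm starts, and much smarter search ordering. The data in the example is made up.

    # A minimal branch-and-bound sketch for  minimize c@x  subject to
    # A_ub@x <= b_ub, x >= 0 and integer.  Illustrative only.
    import math
    import numpy as np
    from scipy.optimize import linprog

    def branch_and_bound(c, A_ub, b_ub, tol=1e-6):
        best_val, best_x = math.inf, None
        stack = [[(0, None)] * len(c)]        # each node = per-variable bounds
        while stack:
            bounds = stack.pop()
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
            if not res.success or res.fun >= best_val:
                continue                      # infeasible node, or pruned by its LP bound
            frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
            if not frac:                      # integer feasible: new incumbent
                best_val, best_x = res.fun, np.round(res.x)
                continue
            i = frac[0]                       # branch on a fractional variable
            lo, hi = bounds[i]
            down, up = list(bounds), list(bounds)
            down[i] = (lo, math.floor(res.x[i]))   # x_i <= floor(value)
            up[i] = (math.ceil(res.x[i]), hi)      # x_i >= ceil(value)
            stack.extend([down, up])
        return best_val, best_x

    # Tiny example: maximize 3*x1 + 2*x2 (minimize the negative) subject to
    # 2*x1 + x2 <= 10 and x1 + 3*x2 <= 14, x integer; the optimum is x = (4, 2).
    print(branch_and_bound(np.array([-3.0, -2.0]),
                           np.array([[2.0, 1.0], [1.0, 3.0]]),
                           np.array([10.0, 14.0])))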

Thus a MIP problem with hundreds of variables (or more) should be approached with a certain degree of caution and patience. A willingness to experiment with alternative formulations and with a MIP code's many search options often pays off in greatly improved performance. In the hardest cases, you may wish to abandon the goal of a provable optimum; by terminating a MIP code prematurely, you can often obtain a high-quality solution along with a provable upper bound on its distance from optimality. A solution whose objective value is within some fraction of 1% of optimal may be all that is required for your purposes. (Indeed, it may be an optimal solution. In contrast to methods for ordinary LP, procedures for MIP may not be able to prove a solution to be optimal until long after they have found it.)

Once one accepts that large MIP models are not typically solved to a proved optimal solution, that opens up a broad area of approximate methods, probabilistic methods and heuristics, as well as modifications to B&B. See [Balas], which contains a useful heuristic for 0-1 MIP models. See also the brief discussion of Genetic Algorithms and Simulated Annealing in the Nonlinear Programming FAQ.

A major exception to this somewhat gloomy outlook is that there are certain models whose LP solution always turns out to be integer, assuming the input data is integer to start with. In general these models have a "unimodular" constraint matrix of some sort, but by far the best-known and most widely used models of this kind are the so-called pure network flow models. It turns out that such problems are best solved by specialized routines, usually based on the simplex method, that are much faster than any general-purpose LP methods. See the section on Network models for further information.

Commercial MIP codes are listed with the commercial LP codes and modeling systems above. The following are notes on some publicly available codes for MIP problems.

The public domain code lp_solve, mentioned earlier, accepts MIP models.

OPBDP is a C++ implementation by Peter Barth of an implicit enumeration algorithm for solving linear 0-1 optimization problems. The developer states that the algorithm compares well with commercial linear programming-based branch-and-bound on a variety of standard 0-1 integer programming benchmarks; exploiting the logical structure of a problem, using OPBDP, is said to yield good performance on problems where exploiting the polyhedral structure seems to be inefficient and vice versa.

I have seen mention made of algorithm 333 in the Collected Algorithms from CACM, though I'd be surprised if it was robust enough to solve large models. I am not aware of this algorithm being available online anywhere.

In [Syslo] is code for 28 algorithms, most of which pertain to some aspect of Discrete Optimization.

Omega analyzes systems of linear equations in integer variables. It does not solve optimization problems, except in the case that a model reduces completely, but its features could be useful in analyzing and reducing MIP models.

Q4. "I wrote an optimization code. Where are some test models?"

A: If you want to try out your code on some real-world linear programs, there is a very nice collection of small-to-medium-size ones, with a few that are rather large, popularly known as the Netlib collection (although Netlib consists of much more than just LP). Also on netlib is a collection of infeasible LP models. See the readme file for a listing and further information. The test problem files (after you uncompress them) are in a format called MPS, which is described in another section of this document. Note that, when you receive a model, it may be compressed both with the Unix compression utility (use uncompress if the file name ends in .Z) and with an LP-specific program (grab either emps.f or emps.c at the same time you download the model, then compile/run the program to undo the compression).

There is a collection of mixed integer (linear) programming (or MIP) models, called MIPLIB, housed at Rice University. TSPLIB is a library of traveling salesman and related problems, including vehicle routing problems. For network flow problems, there are some generators and instances collected at DIMACS. The NETGEN and GNETGEN generators can be downloaded from netlib. Generators and instances for multicommodity network flow problems are maintained by the Operations Research group in the Department of Computer Science at the University of Pisa.

The commercial modeling language GAMS comes with about 160 test models, which you might be able to test your code with. There are also pages containing pointers to numerous examples in AMPL, MPL, and OPL.

There is a collection called MP-TESTDATA available at Konrad-Zuse-Zentrum fuer Informationstechnik Berlin (ZIB). This directory contains various subdirectories, each of which has a file named "index" containing further information. Indexed at this writing are: assign, cluster, lp, ip, matching, maxflow, mincost, set-parti, steiner-tree, tsp, vehicle-rout, and generators.

John Beasley of the Imperial College Management School maintains the OR-Library, which lists linear programming and over 3 dozen other categories of optimization test problems. Finally, Martin Chlond's pages on Integer Programming in Recreational Mathematics provide a variety of challenges for both modelers and software.

Q5. "What is MPS format?"

A: MPS format was named after an early IBM LP product and has emerged as a de facto standard ASCII medium among most of the commercial LP codes. Essentially all commercial LP codes accept this format, but if you are using public domain software and have MPS files, you may need to write your own reader routine for this. It's not too hard. See also the comment regarding the lp_solve code, in another section of this document, for the availability of an MPS reader.

The main things to know about MPS format are that it is column oriented (as opposed to entering the model as equations), and everything (variables, rows, etc.) gets a name. The MIPLIB site provides a concise summary of MPS format, and a more detailed description is given in [Murtagh].

MPS is a very old format, so it is set up as though you were using punch cards, and is not free format. Fields start in column 1, 5, 15, 25, 40 and 50. Sections of an MPS file are marked by so-called header cards, which are distinguished by their starting in column 1. Although it is typical to use upper-case throughout the file (like I said, MPS has long historical roots), many MPS-readers will accept mixed-case for anything except the header cards, and some allow mixed-case anywhere. The names that you choose for the individual entities (constraints or variables) are not important to the solver; you should pick names that are meaningful to you, or will be easy for a post-processing code to read.

Here is a little sample model written in MPS format (explained in more detail below):

NAME          TESTPROB
ROWS
 N  COST
 L  LIM1
 G  LIM2
 E  MYEQN
COLUMNS
    XONE      COST         1   LIM1         1
    XONE      LIM2         1
    YTWO      COST         4   LIM1         1
    YTWO      MYEQN       -1
    ZTHREE    COST         9   LIM2         1
    ZTHREE    MYEQN        1
RHS
    RHS1      LIM1         5   LIM2        10
    RHS1      MYEQN        7
BOUNDS
 UP BND1      XONE         4
 LO BND1      YTWO        -1
 UP BND1      YTWO         1
ENDATA

For comparison, here is the same model written out in an equation-oriented format:

Optimize
 COST:    XONE + 4 YTWO + 9 ZTHREE
Subject To
 LIM1:    XONE + YTWO <= 5
 LIM2:    XONE + ZTHREE >= 10
 MYEQN:   - YTWO + ZTHREE = 7
Bounds
  0 <= XONE <= 4
 -1 <= YTWO <= 1
End

Strangely, there is nothing in MPS format that specifies the direction of optimization. And there really is no standard "default" direction; some LP codes will maximize if you don't specify otherwise, others will minimize, and still others put safety first and have no default and require you to specify it somewhere in a control program or by a calling parameter. If you have a model formulated for minimization and the code you are using insists on maximization (or vice versa), it may be easy to convert: just multiply all the coefficients in your objective function by (-1). The optimal value of the objective function will then be the negative of the true value, but the values of the variables themselves will be correct.

The NAME card can have anything you want, starting in column 15. The ROWS section defines the names of all the constraints; entries in column 2 or 3 are E for equality rows, L for less-than ( <= ) rows, G for greater-than ( >= ) rows, and N for non-constraining rows (the first of which would be interpreted as the objective function). The order of the rows named in this section is unimportant.

The largest part of the file is in the COLUMNS section, which is the place where the entries of the A-matrix are put. All entries for a given column must be placed consecutively, although within a column the order of the entries (rows) is irrelevant. Rows not mentioned for a column are implied to have a coefficient of zero.

The RHS section allows one or more right-hand-side vectors to be defined; most people don't bother having more than one. In the above example, the name of the RHS vector is RHS1, and it has non-zero values in all 3 of the constraint rows of the problem. Rows not mentioned in an RHS vector would be assumed to have a right-hand-side of zero.

The optional BOUNDS section lets you put lower and upper bounds on individual variables (no * wild cards, unfortunately), instead of having to define extra rows in the matrix. All the bounds that have a given name in column 5 are taken together as a set. Variables not mentioned in a given BOUNDS set are taken to be non-negative (lower bound zero, no upper bound). A bound of type UP means an upper bound is applied to the variable. A bound of type LO means a lower bound is applied. A bound type of FX ("fixed") means that the variable has upper and lower bounds equal to a single value. A bound type of FR ("free") means the variable has neither lower nor upper bounds.

There is another optional section called RANGES that I won't go into here. The final card must be ENDATA, and yes, it is spelled funny.
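
As the answer above says, writing a basic MPS reader is not too hard. The sketch below (an illustration in Python, not a routine this FAQ distributes) handles only the four sections used in the sample file; it splits records on whitespace rather than enforcing the strict field columns, and it ignores RANGES, multiple RHS vectors, and the other refinements a full reader would need.

    # A very small MPS reader sketch (illustrative only).
    def read_mps(path):
        rows, cols, rhs, bounds = {}, {}, {}, {}
        section = None
        with open(path) as f:
            for line in f:
                if not line.strip() or line.startswith("*"):
                    continue                      # skip blank lines and comment cards
                if line[0] not in " \t":          # header cards start in column 1
                    section = line.split()[0]
                    continue
                fields = line.split()
                if section == "ROWS":
                    sense, name = fields[0], fields[1]
                    rows[name] = sense            # N, L, G, or E
                elif section == "COLUMNS":
                    col = fields[0]
                    for rname, val in zip(fields[1::2], fields[2::2]):
                        cols.setdefault(col, {})[rname] = float(val)
                elif section == "RHS":
                    for rname, val in zip(fields[1::2], fields[2::2]):
                        rhs[rname] = float(val)
                elif section == "BOUNDS":
                    btype, _, col, *rest = fields
                    bounds.setdefault(col, []).append(
                        (btype, float(rest[0]) if rest else None))
        return rows, cols, rhs, bounds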

Q6. Topics briefly covered:

Q6.1: "What is a modeling language?"

A: There is more to linear programming (or integer programming) than optimal solutions and number-crunching. This can be appreciated by observing that modern LP software comes in two related but very different kinds of packages:

Algorithmic codes are devoted to finding optimal solutions to specific linear programs. A code takes as input a compact listing of the LP constraint coefficients (the A, b, c and related values in the standard form) and produces as output a similarly compact listing of optimal solution values and related information.

Modeling systems are designed to help people formulate LPs and analyze their solutions. An LP modeling system takes as input a description of a linear program in a form that people find reasonably natural and convenient, and allows the solution output to be viewed in similar terms; conversion to the forms required by algorithmic codes is done automatically. The collection of statement forms for the input is often called a modeling language.

Most modeling systems support a variety of algorithmic codes, while the more popular codes can be used with many different modeling systems. Because packages of the two kinds are often bundled for convenience of marketing or operation, the distinction between them is sometimes obscured, but it is important to keep in mind when sorting through the many possibilities. See under Commercial Codes and Modeling Systems elsewhere in this FAQ for a list of modeling systems available. There are no free ones of note, but many do offer free demo versions. Common alternatives to modeling languages and systems include spreadsheet front ends to optimization, and custom optimization applications written in general-purpose programming languages. You can find a discussion of the pros and cons of these approaches in What Modeling Tool Should I Use? on the Frontline Systems website.

Q6.2: "How do I diagnose an infeasible LP model?"

A: A linear program is infeasible if there exists no solution that satisfies all of the constraints -- in other words, if no feasible solution can be constructed. Since any real operation that you are modeling must remain within the constraints of reality, infeasibility most often indicates an error of some kind. Simplex-based LP software efficiently detects when no feasible solution is possible; some early interior-point codes could not detect an infeasible situation as reliably, but remedies for this flaw have been introduced.

The source of infeasibility is often difficult to track down. It may stem from an error in specifying some of the constraints in your model, or from some wrong numbers in your data. It can be the result of a combination of factors, such as the demands at some customers being too high relative to the supplies at some warehouses.

Upon detecting infeasibility, LP codes typically show you the most recent infeasible solution that they have encountered. Sometimes this solution provides a good clue as to the source of infeasibility. If it fails to satisfy certain capacity constraints, for example, then you would do well to check whether the capacity is sufficient to meet the demand; perhaps a demand number has been mistyped, or an incorrect expression for the capacity has been used in the capacity constraint, or the model simply lacks any provision for coping with increasing demands. More often, unfortunately, LP codes respond to an infeasible problem by returning a meaninglessly infeasible solution, such as one that violates material balances.

A more useful approach is to forestall meaningless infeasibilities by explicitly modeling those sources of infeasibility that you view as realistic. As a simple example, you could add a new "slack" variable on each capacity constraint, having a very high penalty cost. Then infeasibilities in your capacities would be signalled by positive values for these slacks at the optimal solution, rather than by a mysterious lack of feasibility in the linear program as a whole. Many modelers recommend the use of "soft constraints" of this kind in all models, since in reality many so-called constraints can be violated for a sufficiently high price. Modeling approaches that use such constraints have a number of names, most notably "goal programming" and "elastic programming".
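
Here is a small sketch of that penalized-slack ("elastic") idea, with made-up data and SciPy's linprog standing in for whatever LP solver you use: the capacity row x1 + x2 <= cap cannot hold together with the demand row, so an extra slack column s with a large cost keeps the model feasible and reports the shortfall.

    # Elastic-constraint sketch (cap, PENALTY and all data are illustrative).
    import numpy as np
    from scipy.optimize import linprog

    cap, PENALTY = 100.0, 1.0e6
    c = np.array([3.0, 5.0, PENALTY])        # last column is the slack variable s
    A_ub = np.array([[1.0, 1.0, -1.0]])      # x1 + x2 - s <= cap  ("soft" capacity)
    b_ub = np.array([cap])
    A_eq = np.array([[1.0, 2.0, 0.0]])       # demand row: x1 + 2*x2 = 260
    b_eq = np.array([260.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    print(res.x)    # s comes out positive (about 30), flagging the capacity shortfall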

Q6.3: "I want to know the specific constraints that contradict each other."

A: There can be many ways to answer this question, some of them potentially harder than solving the underlying LP would be (if it were feasible). One useful approach is to apply auxiliary algorithms that look for small groups of constraints that can be considered to "cause" the infeasibility of the LP. Several codes include methods for finding an "irreducible infeasible subset" (IIS) of constraints that has no feasible solution, but that becomes feasible if any one constraint is removed. John Chinneck has developed MINOS(IIS), an extended version of the MINOS package that finds an IIS when the constraints have no feasible solution; a demonstration copy is available for downloading. There are also IIS finders in CPLEX, LINDO, OSL, and Xpress-MP, as well as Premium Solver Platform for Excel. Methods also exist for finding an "IIS cover" that has at least one constraint in every IIS. A minimal IIS cover is the smallest subset of constraints whose removal makes the linear program feasible. Further
details and references for a variety of IIS topics are available in papers by John Chinneck. The software system ANALYZE carries out various other analyses to detect structures typically associated with infeasibility. (A bibliography on optimization modeling systems collected by Harvey Greenberg of the University of Colorado at Denver contains cross-references to over 100 papers on the subject of model analysis.)

Q6.4: "I just want to know whether or not a feasible solution exists."

A: From the standpoint of computational complexity, finding out whether an LP model has a feasible solution is essentially as hard as actually finding the optimal LP solution (within a factor of 2 on average, in terms of effort in the Simplex Method); plug your problem into a normal LP solver with any objective function you like, such as c=0. For MIP models it's also difficult: if there exists no feasible solution, then you must go through the entire Branch and Bound procedure (or whatever algorithm you use) to prove this. There are no shortcuts in general, unless you know something useful about your model's structure (e.g., if you are solving some form of a transportation problem, you may be able to assure feasibility by checking that the total supply at the sources is at least as large as the total demand at the destinations).
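
A minimal sketch of the c=0 trick, again assuming scipy.optimize.linprog (not part of this FAQ) and toy data invented for the example:

    # Feasibility check only: the zero objective makes any feasible point optimal.
    from scipy.optimize import linprog

    A_ub = [[1.0, 1.0], [-1.0, 0.0]]      # x + y <= 4,  x >= 1
    b_ub = [4.0, -1.0]
    res = linprog([0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print("feasible" if res.success else "no feasible solution")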

Q6.5: "I have an LP, except it's got several objective functions."

A: If you have several objectives, then you may find that they cannot all be optimized by any one solution. Instead, you will need to look for a solution or solutions that achieve an acceptable tradeoff between objectives. Deciding what tradeoffs are "acceptable" is a topic of investigation in its own right. You may want to consult MCDM WorldScan, the newsletter of the International Society on Multiple Criteria Decision Making. There are a few free software packages specifically for multiple objective linear programming, including:

ADBASE computes all efficient (i.e., nondominated) extreme points of a multiple objective linear program. It is available without charge for research and instructional purposes. If someone has a genuine need for such a code, they should send a request to: Ralph E. Steuer, Faculty of Management Science, Brooks Hall, University of Georgia, Athens, GA 30602-6255.

PROTASS , developed by Rafal Cytrycki ([email protected]) and Andrzej Dzik ([email protected]), provides two methods for multicriteria optimization: STEM and HSJ.

NIMBUS is an interactive multiobjective optimization system that has a Web interface.

Other approaches that have worked are:

Goal Programming: treat the objectives as constraints with costed slacks, or, almost equivalently, form a composite function from the given objective functions.

Pareto preference analysis: essentially a brute-force examination of all vertices.

Preemptive (or lexicographic) optimization: put your objective functions in priority order, optimize on one objective, then change it to a constraint fixed at the optimal value (perhaps subject to a small tolerance), and repeat with the next function. A minimal sketch of this approach appears below.
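
Here is a hedged sketch of the preemptive approach for two objectives, using scipy.optimize.linprog (an assumption, not something this FAQ prescribes) on a toy model with alternative optima in the first stage:

    # Stage 1: maximize x + y; stage 2: among those optima, maximize y.
    from scipy.optimize import linprog

    A_ub = [[1.0, 1.0]]                   # x + y <= 10
    b_ub = [10.0]
    bounds = [(0, 7), (0, 8)]             # x <= 7, y <= 8
    c1 = [-1.0, -1.0]                     # linprog minimizes, so negate to maximize
    c2 = [0.0, -1.0]

    r1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

    # Fix the first objective near its optimum and optimize the second one.
    tol = 1e-6
    A2 = A_ub + [c1]                      # c1 . x <= (stage-1 optimum) + tol
    b2 = b_ub + [r1.fun + tol]
    r2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=bounds)
    print(r2.x)                           # roughly x = 2, y = 8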

There is a section on this whole topic in [Nemhauser]. [Schrage] has a chapter devoted to the subject. [Hwang] has also been recommended by a reader on Usenet. As a final piece of advice, if you can cast your model in terms of physical realities, or dollars and cents, sometimes the multiple objectives disappear! 8v)

Q6.6: "I have an LP that has large almost-independent matrix blocks that are linked by a few constraints. Can I take advantage of this?"

A: Possibly you could solve your LP faster by using some form of decomposition scheme; see for example section 6.2 in [Nemhauser] or chapter 26 of [Chvatal]. The commercial code OSL has features to assist in decomposing so-called Dantzig-Wolfe and staircase structures. With any other code, you'll have to create your own decomposition framework and then call an LP solver to handle the subproblems. The folklore is that generally decomposition schemes take a long time to converge, so that they're slower than just solving the model as a whole -- although research continues. For now my advice, unless you are using OSL or your model is so huge that you can't buy enough memory to hold it, is to not bother decomposing it. It's probably more cost effective to upgrade your solver than to invest more time in programming (a good piece of advice in many situations).

Q6.7: "I am looking for an algorithm to compute the convex hull of a finite number of points in n-dimensional space."

A: A good place to start is the directory of Codes for Polytope and Polyhedron Algorithms at the Konrad-Zuse-Zentrum für Informationstechnik Berlin. It includes codes for convex hull computation, as well as for the opposite problem of generating all extreme points and extreme rays of a general convex polyhedron given by a system of linear inequalities. Here are further comments on some of these codes:

Ken Clarkson has written Hull, an ANSI C program that computes the convex hull of a point set in general dimension. The input is a list of points, and the output is a list of facets of the convex hull of the points, each facet presented as a list of its vertices.

Qhull computes convex hulls as well as Delaunay triangulations, halfspace intersections about a point, Voronoi diagrams, and related objects. It uses the "Beneath Beyond" method, described in [Edelsbrunner].

Komei Fukuda's cdd solves both the convex hull and vertex enumeration problems, using the Double Description Method of Motzkin et al. There are also a C++ version (cdd+) and a C-library version (cddlib) that can be compiled to run with both floating-point and GMP exact rational arithmetics.

David Avis's lrslib is a self-contained ANSI C implementation of the reverse search algorithm for vertex enumeration/convex hull problems, implemented as a callable library with a choice of three arithmetic packages.

VE041, another implementation of this approach by Fukuda and Mizukoshi, is available in a Mathematica implementation.

The Center for the Computation and Visualization of Geometric Structures at the University of Minnesota maintains a list of its downloadable software, and hosts a directory of computational geometry software compiled by Nina Amenta.

Other algorithms for such problems are described in [Swart], [Seidel], and [Avis]. Such topics are said to be discussed in [Schrijver] (page 224), [Chvatal] (chapter 18), [Balinski], and [Mattheis] as well.
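
For low to moderate dimensions, the Qhull library mentioned above is also reachable from Python through SciPy; the following minimal sketch assumes SciPy and NumPy are installed (neither is covered by this FAQ):

    # Convex hull of 30 random points in 3-space via scipy.spatial.ConvexHull,
    # which calls Qhull internally.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    points = rng.random((30, 3))
    hull = ConvexHull(points)
    print(hull.vertices)            # indices of the input points on the hull
    print(len(hull.simplices))      # number of facets (triangles in 3-D)
    print(hull.volume)              # enclosed volume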

Q6.8: "Are there any parallel LP codes?"

A: Options for parallel computation of various kinds have become a common feature of software for linear and mixed-integer programming. Here are some examples:

IBM's OSL Parallel Extensions implement interior-point methods for linear programming and branch-and-bound methods for mixed-integer programming on homogeneous networks under Windows NT, AIX 4.x (on IBM RS/6000 and SP systems), HP-UX, Sun Solaris, and SGI IRIX, using the Parallel Virtual Machine (PVM) System version 3.4 for communication.

The CPLEX Parallel Solvers include simplex, barrier, and branch-and-bound solvers for linear and mixed-integer programming on a number of multiple-CPU systems.

Dash's Xpress-Parallel includes a branch-and-bound mixed-integer programming code designed to exploit both multi-processor computers and networks of workstations.

OOPS is an object-oriented parallel implementation of the interior point algorithm, developed by Jacek Gondzio ([email protected]), Andreas Grothey and Robert Sarkissian. The code can exploit any special structure of the problem. It runs on all parallel computing platforms that support MPI.

Two parallel branch-cut-price frameworks are available to those who want to program specialized solvers for hard combinatorial problems that can be approached via integer programming:

Symphony requires the user to supply model-specific preprocessing and separation functions, while other components including search tree, cut pool, and communication management are handled internally. Source code is included for basic applications to traveling salesman and vehicle routing problems. The distributed version runs in any environment supported by the PVM message passing protocol, and can also be compiled for shared-memory architectures using any OpenMP compliant compiler.

BCP is a component of the COIN open source initiative.

Performance evaluations of parallel solvers must be interpreted with care. One common measurement is the "speedup" defined as the time for solution using a single processor divided by the time using multiple processors. A speedup close to the number of processors is ideal in some sense, but it is only a relative measure. The greatest speedups tend to be achieved by the least efficient codes, and especially by those that fail to take advantage of the sparsity (predominance of zero coefficients) in the constraints. For problems having thousands of constraints, a sparse single-processor code will tend to be faster than a non-sparse multiprocessor code running on current-day hardware.

Q6.9: "What software is there for Network models?"

A: In the context of linear programming, the term "network" is most often associated with the minimum-cost network flow problem. A network for this problem is viewed as a collection of nodes (or circles or locations) and arcs (or lines or routes) connecting selected pairs of nodes. Arcs carry a physical or conceptual flow of some kind, and may be directed (one-way) or undirected (two-way). Some nodes may be sources (permitting flow to enter the network) or sinks (permitting flow to leave).

The network linear programming problem is to minimize the (linear) total cost of flows along all arcs of a network, subject to conservation of flow at each node, and upper and/or lower bounds on the flow along each arc. This is a special case of the general linear programming problem. The transportation problem is an even more special case in which the network is bipartite: all arcs run from nodes in one subset to the nodes in a disjoint subset. A variety of other well-known network problems, including shortest path problems, maximum flow problems, and certain assignment problems, can also be modeled and solved as network linear programs. Details are presented in many books on linear programming and operations research.

Network linear programs can be solved 10 to 100 times faster than general linear programs of the same size, by use of specialized optimization algorithms. Some commercial LP solvers include a version of the network simplex method for this purpose. That method has the nice property that, if it is given integer flow data, it will return optimal flows that are integral. Integer network LPs can thus be solved efficiently without resort to complex integer programming software.

Unfortunately, many different network problems of practical interest do not have a formulation as a network LP. These include network LPs with additional linear "side constraints" (such as multicommodity flow problems) as well as problems of network routing and design that have completely different kinds of constraints. In principle, nearly all of these network problems can be modeled as integer programs. Some "easy" cases can be solved much more efficiently by specialized network algorithms, however, while other "hard" ones are so difficult that they require specialized methods that may or may not involve some integer programming. Contrary to many people's intuition, the statement of a hard problem may be only marginally more complicated than the statement of some easy problem. A canonical example of a hard network problem is the "traveling salesman" problem of finding a shortest tour through a network that visits each node once. A canonical easy problem not obviously equivalent to a linear program is the "minimum spanning tree" problem to find a least-cost collection of arcs that connect all the nodes. But if instead you want to connect only some given subset of nodes (the "Steiner tree" problem) then you are faced with a hard problem. These and many other network problems are described in some of the references below.

Software for network optimization is thus in a much more fragmented state than is general-purpose software for linear programming. The following are some of the implementations that are available for downloading. Most are freely available for many purposes, but check their web pages or "readme" files for details.

ASSCT , an implementation of the Hungarian Method for the Assignment problem (#548 from Collected Algorithms of the ACM).

GIDEN , an interactive graphical environment for a variety of network problems and algorithms, available as a Java application or as an applet that can be executed through any Java-enabled Web browser. Further information is available by writing to [email protected].

MCF , a C implementation of primal and dual network simplex methods (from Andreas Loebel, [email protected]), supported for Unix/Linux environments and Windows 95/98/NT (MS Visual C++ 6.0), and available for academic use free of charge.

Netflo , the Fortran network simplex code from [Kennington], and several codes for maximum matching and maximum flow problems (from DIMACS, [email protected])

PPRN , for single or multicommodity network flow problems having a linear or nonlinear objective function, optionally with linear side constraints, by Jordi Castro ([email protected])

RELAX-IV for minimum-cost network flows (by Dimitri Bertsekas, [email protected] and Paul Tseng, [email protected]); also a C++ version of the RELAX-IV algorithm (at the Department of Computer Science, University of Pisa, [email protected])

The following indexes may also be useful:

Network optimization codes in Fortran 77 and in C, compiled by Ernesto Martins ([email protected]).

The network optimization library, including codes for assignment, shortest path, minimum-cost flow, and maximum flow/minimum cut, by Andrew Goldberg ([email protected]).

Optimization routines for networks and graphs in the listing of public-domain optimization codes maintained by Jiefeng Xu ([email protected]).

Network optimization listings from the NEOS Guide.

Fortran code for the Assignment Problem and others can also be copied from [Burkard] and from [Martello].
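
As a small, hedged illustration of the network flow LP described above, here is a sketch using the NetworkX package (an assumption on my part; it is not covered by this FAQ), whose network_simplex routine also exhibits the integrality property mentioned earlier:

    # Minimum-cost flow on a tiny bipartite (transportation) network.
    import networkx as nx

    G = nx.DiGraph()
    # Negative demand marks a supply node; positive demand marks a sink.
    G.add_node("s1", demand=-4)
    G.add_node("s2", demand=-3)
    G.add_node("t1", demand=5)
    G.add_node("t2", demand=2)
    G.add_edge("s1", "t1", weight=2, capacity=4)
    G.add_edge("s1", "t2", weight=5, capacity=4)
    G.add_edge("s2", "t1", weight=3, capacity=3)
    G.add_edge("s2", "t2", weight=1, capacity=3)

    cost, flow = nx.network_simplex(G)
    print(cost)    # total shipping cost
    print(flow)    # arc flows; integral here because supplies and demands are integers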

Q6.10: "What software is there for the Traveling Salesman Problem?"

A: This famously hard problem -- known more briefly as the TSP -- is to find the shortest tour that visits a given collection of cities or locations of some kind. It is a hard (NP-complete) problem just like integer programming, but the obvious integer programming formulations of it are not especially useful in getting good solutions within a reasonable amount of time. The TSP has attracted many of the best minds in the optimization field, and serves as a kind of test-bed for methods subsequently applied to more complex and practical problems. Methods have been explored both to give proved optimal solutions, and to give approximate but "good" solutions, with a number of codes being developed as a result:

Concorde has solved a number of the largest TSPs for which proved optimal solutions are known. It employs a polyhedral approach, which is to say that it relies on a range of exceedingly sophisticated linear programming constraints, in a framework that resembles integer programming branch-and-bound methods. The constraints are selectively generated as the solution process proceeds. The full C code is available without cost for research purposes.

Public domain code for the Asymmetric TSP (with travel between two cities significantly cheaper in one of the two directions) is available in TOMS routine #750, documented in [Carpaneto].

Code for a solver can be obtained via instructions in [Volgenant].

Chad Hurwitz ([email protected]) offers a code called tsp_solve, for heuristic and optimal solution, to those who email him.

[Syslo] contains some algorithms and Pascal code.

Numerical Recipes [Press] contains code that uses simulated annealing.

Stephan Mertens's TSP Algorithms in Action uses Java applets to illustrate some simple heuristics and compare them to optimal solutions, on 10-25 node problems.

Onno Waalewijn has constructed Java TSP applets exhibiting the behavior of different methods for heuristic and exhaustive search on various test problems.

For a bibliography, check the Integer Programming section of [Nemhauser], particularly the references with the names Groetschel and/or Padberg in them. Other good references are [Lawler] and [Reinelt]. Sophisticated and widely used heuristics for getting a "good" solution are described in the article by Lin and Kernighan in Operations Research 21 (1973) 498-516. For practical purposes, the traveling salesman problem is only the simplest case of what are generally known as vehicle-routing problems. Thus commercial software packages for vehicle routing -- or more generally for "supply chain management" or "logistics" -- may have TSP routines buried somewhere within them. OR/MS Today published a detailed vehicle routing software survey in their August 2000 issue.
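
By way of contrast with the sophisticated codes above, here is a minimal sketch (plain Python, with random city locations invented for the example) of the crudest construction heuristic, nearest neighbor; serious heuristic codes, such as those based on Lin-Kernighan, go far beyond this:

    # Nearest-neighbor tour construction: from the current city, always move to
    # the closest unvisited city.  Fast, but typically well short of optimal.
    import math
    import random

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(12)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)

    length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))
    print(tour, round(length, 3))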

Q6.11: "What software is there for cutting or packing problems?"

A: The cutting (or "cutting stock") problem involves cutting large pieces of something into specified numbers of smaller pieces in the most economical way. In the one-dimensional version, cutting only reduces a single measurement, usually referred to as the length or width of the pieces; examples include cutting wide rolls of paper or sheet steel into specified numbers of smaller widths (also called the "roll trim" problem), and cutting long pieces of wood or pipe into specified numbers of shorter pieces. In the two-dimensional version, both a length and width may be specified for both the large pieces you start with and the smaller ones to be cut, or the shapes to be cut may be more general. The material may again be wood or metal, or paper or fabric, or even cookie dough.

The packing problem can be regarded as a kind of cutting in reverse, where the goal is to fill large spaces with specified smaller pieces in the most economical (or profitable) way. As with cutting, there are one-dimensional problems (also called knapsack problems) and two-dimensional problems, but there are also many three-dimensional cases such as arise in filling trucks or shipping containers. The size measure is not always length or width; it may be weight, for example.

Except for some very special cases, cutting and packing problems are hard (NP-complete) like integer programming or the TSP. The simpler one-dimensional instances are often not hard to solve in practice, however:

For knapsack problems, a good MIP solver applied to a straightforward integer-programming formulation can often be used. Specialized algorithms are said to be available in [Syslo] and [Martello].

For roll-trim problems, it is not hard to implement the Gilmore-Gomory procedure using a linear programming subroutine library or modeling language. The NEOS Guide's cutting stock case study introduces this procedure and lets you try small examples, while a good detailed presentation can be found in chapter 13 of [Chvatal]; a minimal sketch is given after this list. This approach is based on a linear programming problem that decides how many rolls should be cut in each of several patterns, which means that fractional numbers of patterns may show up in the results. As long as most widths are needed in substantial quantities -- say, 10 or more -- an acceptable result can usually be found by rounding the fractions, or by solving one integer program at the end of the procedure.
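
The following is a hedged sketch of the Gilmore-Gomory procedure in Python, using scipy.optimize.linprog (an assumption; any LP library would do) and small invented data. For clarity it obtains the dual prices by solving the dual of the restricted master directly; a production implementation would instead take the duals straight from the primal solve:

    # Cutting stock by column (pattern) generation, Gilmore-Gomory style.
    # Raw rolls of width W are cut into smaller widths with given demands.
    import math
    from scipy.optimize import linprog

    W = 100                               # width of a raw roll
    widths = [45, 36, 31, 14]             # ordered widths (integers, for the DP)
    demand = [97, 610, 395, 211]

    # Start with one trivial pattern per width: as many pieces as fit.
    patterns = [[W // w if i == j else 0 for i, w in enumerate(widths)]
                for j in range(len(widths))]

    def prices(patterns):
        # Dual of the restricted master  min sum(x) s.t. sum_p a_p x_p >= d, x >= 0
        # is  max d.y  s.t.  a_p . y <= 1 for each pattern p,  y >= 0.
        res = linprog([-d for d in demand],
                      A_ub=patterns, b_ub=[1.0] * len(patterns),
                      bounds=[(0, None)] * len(widths))
        return list(res.x)

    def best_pattern(y):
        # Unbounded knapsack: maximize total dual value of a pattern of width <= W.
        best = [0.0] * (W + 1)
        choice = [-1] * (W + 1)
        for cap in range(1, W + 1):
            for i, w in enumerate(widths):
                if w <= cap and best[cap - w] + y[i] > best[cap]:
                    best[cap] = best[cap - w] + y[i]
                    choice[cap] = i
        pattern, cap = [0] * len(widths), W
        while choice[cap] != -1:
            pattern[choice[cap]] += 1
            cap -= widths[choice[cap]]
        return best[W], pattern

    while True:
        value, pattern = best_pattern(prices(patterns))
        if value <= 1.0 + 1e-9:           # no pattern prices out: LP optimum reached
            break
        patterns.append(pattern)

    # Solve the final restricted master and round up to whole rolls.
    res = linprog([1.0] * len(patterns),
                  A_ub=[[-p[i] for p in patterns] for i in range(len(widths))],
                  b_ub=[-d for d in demand],
                  bounds=[(0, None)] * len(patterns))
    plan = [math.ceil(x - 1e-9) for x in res.x]
    print(plan, sum(plan))                # rolls to cut of each pattern, total rolls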

One-dimensional problems that are subject to additional special requirements can be much harder, though; see [Haessler] for some examples in the roll-trim context. Higher-dimensional problems tend to be much harder still. As a result, there are a great variety of approximating heuristics and exact methods for solving various classes of cutting and packing problems. To find a useful solver for your particular problem may take some searching, and the best ones may be costly.

There has been a great deal written on cutting and packing topics, but it tends to be scattered. You might want to start by looking at the web page of the Special Interest Group on Cutting and Packing and the "application-oriented research bibliography" in [Sweeney]. A search through some of the INFORMS databases will also turn up a lot of references. In fact, even an ordinary web search engine can find you a lot on this topic; try searching on "cutting stock". Among the commercial systems are CutRite, act/cut and act/square of Alma, Cutting Optimizer of Ardis, Cutter of Escape, Cube-IQ and PlanIQ of MagicLogic, PLUS1D and PLUS2D of Nirvana Technologies, Cut Planner of Pattern Systems, and SmartTRIM of Strategic Systems. Particular application areas, from paper to carpeting, have also given rise to their own specialized cutting-stock tools, which can often be found by a web search on the area of interest.

Q6.12: "What software is there for linear programming under uncertainty?"

A: See the Stochastic Programming Community Home Page for listings of software as well as conferences, tutorial materials, and publications. Instructions for joining a stochastic programming mailing list can also be found at this site. [Thanks to Derek Holmes for the following text.]

Your success solving a stochastic program depends greatly on the characteristics of your problem. The two broad classes of stochastic programming problems are recourse problems and chance-constrained (or probabilistically constrained) problems. Recourse problems are staged problems wherein one alternates decisions with realizations of stochastic data. The objective is to minimize total expected costs of all decisions. The main sources of code (not necessarily public domain) depend on how the data is distributed and how many stages (decision points) are in the problem.

For discretely distributed multistage problems, a good package called MSLiP has been developed by Gus Gassmann ([email protected]). Also, for discretely distributed problems that are not too large, a deterministic equivalent can be formed, which can be solved with a standard solver; a minimal sketch of this appears after this list.

The most recent program for continuously distributed data is BRAIN, by K. Frauendorfer ([email protected]), written up in detail in the author's monograph ``Stochastic Two-Stage Programming'', Lecture Notes in Economics & Math. Systems #392 (Springer-Verlag).

IBM's Stochastic Solutions and Optimization Library Stochastic Extensions build on the OSL library to solve multistage stochastic programs.

POSTS is a small test set of recourse problems designed to highlight different qualities; it is meant as a common test bed for reporting the computational characteristics of state-of-the-art SLP algorithms and their implementations. Some test problems from a financial application have been posted by the Centre for Financial Research.
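
Here is a hedged sketch of such a deterministic equivalent for a tiny two-stage recourse model, using scipy.optimize.linprog (an assumption, not something this FAQ prescribes) and data invented purely for the example:

    # Deterministic equivalent of a tiny two-stage recourse model: choose
    # production x now at unit cost 1.0; once demand d_s is revealed, buy any
    # shortfall y_s at the higher recourse cost 1.6.  Minimize first-stage cost
    # plus expected recourse cost over the scenarios.
    from scipy.optimize import linprog

    prob = [0.3, 0.5, 0.2]                   # scenario probabilities
    demand = [80.0, 100.0, 130.0]
    c_x, c_y = 1.0, 1.6

    # Variables: [x, y_1, y_2, y_3]
    c = [c_x] + [p * c_y for p in prob]

    # x + y_s >= d_s for every scenario s, written as -x - y_s <= -d_s
    A_ub, b_ub = [], []
    for s, d in enumerate(demand):
        row = [-1.0] + [0.0] * len(demand)
        row[1 + s] = -1.0
        A_ub.append(row)
        b_ub.append(-d)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(demand)))
    print(res.x)   # x is the here-and-now decision; the y_s are per-scenario recourse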

Chance-Constrained Programming (CCP) problems are not usually staged, and have a constraint of the form Pr( Ax <= b ) >= alpha. The solvability of CCP problems depends on the distribution of the data (A and/or b). I don't know of any public domain codes for CCP problems, but you can get an idea of how to approach the problem by reading Chapter 5 (by Prof. A. Prekopa, [email protected]) in Y. Ermoliev and R. J-B. Wets, eds., Numerical Techniques for Stochastic Optimization (Series in Comp. Math. 10, Springer-Verlag, 1988). Both Springer-Verlag texts mentioned above are good introductory references to Stochastic Programming. The Stochastic Programming E-Print Series collects many recent papers in the field.

Q6.13: "I need to do post-optimal analysis."

A: Many commercial LP codes have features to do this. Also called Ranging or Sensitivity Analysis, it gives information about how the coefficients in the problem could change without affecting the nature of the solution. Most LP textbooks, such as [Nemhauser], describe this. Unfortunately, all this theory applies only to LP.

For a MIP model with both integer and continuous variables, you could get a limited amount of information by fixing the integer variables at their optimal values, re-solving the model as an LP, and doing standard post-optimal analyses on the remaining continuous variables; but this tells you nothing about the integer variables, which presumably are the ones of interest.

Another MIP approach would be to choose the coefficients of your model that are of the most interest, and generate "scenarios" using values within a stated range created by a random number generator. Perhaps five or ten scenarios would be sufficient; you would solve each of them, and by some means compare, contrast, or average the answers that are obtained. Noting patterns in the solutions, for instance, may give you an idea of what solutions might be most stable.

A third approach would be to consider a goal-programming formulation; perhaps your desire to see post-optimal analysis is an indication that some important aspect is missing from your model.
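
For the "scenarios" idea, here is a hedged sketch with scipy.optimize.linprog and a toy model; the coefficients, ranges, and number of scenarios are all invented for the example:

    # Generate a handful of scenarios by randomly varying selected objective
    # coefficients, re-solve each, and look for solutions that keep recurring.
    import random
    from scipy.optimize import linprog

    random.seed(0)
    base_c = [-3.0, -2.0]                    # maximize 3x + 2y (negated for linprog)
    A_ub = [[1.0, 1.0], [1.0, 0.0]]          # x + y <= 10,  x <= 7
    b_ub = [10.0, 7.0]

    solutions = []
    for _ in range(5):
        c = [cj * (1.0 + random.uniform(-0.2, 0.2)) for cj in base_c]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
        solutions.append(tuple(round(v, 3) for v in res.x))
    print(solutions)     # a solution that recurs across scenarios is likely stable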

Q6.14: "Do LP codes require a starting vertex?"

A: No. Any reasonable simplex-based LP code can construct a starting vertex (or "basic solution") for you, given the constraints and the objective function. Most codes go through a so-called two-phase procedure, wherein they first look for a feasible solution, and then work on getting an optimal solution. (The two phases are often grandly titled phase I and phase II.) The first phase, if properly written, can begin at any infeasible basic solution, and commercial codes typically have a so-called crash routine to pick a reasonable start. It is generally not worth going to a lot of trouble to look for a better starting basic solution, unless you have already solved a similar problem. The optimal basis (the list of basic variables) from a similar problem often does help the simplex solver to get a good start that substantially reduces the number of iterations to optimality. Commercial codes generally provide for saving an optimal basis and reusing it in this way, but free codes may not.

Interior-point methods for LP have entirely different requirements for a good starting point. Any reasonable interior-point-based LP code has its own routines for picking a starting point that is "well-centered" away from the constraints, in an appropriate sense. There is not much advantage to supplying your own starting point of any kind -- at least, not at the current state of the art -- and some codes do not even provide an option for giving a starting point.

Q6.15: "How can I combat cycling in the Simplex algorithm?"

A: Cycling is the condition that occurs when the Simplex method gets "stuck" and finds itself repeating the same vertices over and over. While this specific behavior is rather rare in practice, it is quite common for the algorithm to reach a point where it temporarily stops making forward progress in terms of improvement in the objective function; this is termed "stalling", or more loosely known as "degeneracy", since it is caused by one or more basic variables taking on the value of a lower or upper bound. In most cases, the algorithm will work through this nest of coincident vertices, then resume making tangible progress. However, in extreme cases the degeneracy is so bad that to all intents and purposes it can be considered cycling.

The simplest answer to the problem of degeneracy/cycling is often to "get a better optimizer", i.e. one with stronger pricing algorithms, and a better selection of features. However, obviously that is not always an option (money!), and even the best LP codes can run into degeneracy on certain models. Besides, they say it's a poor workman who blames his tools.

So, when one cannot change the optimizer, it's expedient to change the model. Not drastically, of course, but a little "noise" can usually help to break the ties that occur during the Simplex method. A procedure that can work nicely is to add, to the values in the RHS, random values roughly six orders of magnitude smaller. Depending on your model's formulation, such a perturbation may not even seriously affect the quality of the solution values. However, if you want to switch back to the original formulation, the final solution basis for the perturbed model should be a useful starting point for a "cleanup" optimization phase. (Depending on the code you are using, this may take some ingenuity to do, however.)

Another helpful tactic: if your optimization code has more than one solution algorithm, you can alternate among them. When one algorithm gets stuck, begin again with another algorithm, using the most recent basis as a starting point. For instance, alternating between a primal and a dual method can move the solution away from a nasty point of degeneracy. Using partial pricing can be a useful tactic against true cycling, as it tends to reorder the columns. And of course Interior Point algorithms are much less affected by (though not totally immune to) degeneracy. Unfortunately, the optimizers richest in alternate algorithms and features also tend to be least prone to problems with degeneracy in the first place.
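
A hedged sketch of the RHS perturbation described above (plain Python plus scipy.optimize.linprog, which is an assumption; the model data are invented for the example):

    # Perturb each right-hand side by a random amount roughly six orders of
    # magnitude smaller than the data, then solve the perturbed model.
    import random
    from scipy.optimize import linprog

    random.seed(1)
    c = [-1.0, -1.0]
    A_ub = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    b_ub = [40.0, 60.0, 100.0]            # note the degenerate vertex at (40, 60)

    scale = max(abs(v) for v in b_ub)
    b_pert = [v + random.uniform(0.0, scale * 1e-6) for v in b_ub]

    res = linprog(c, A_ub=A_ub, b_ub=b_pert, bounds=[(0, None)] * 2)
    print(res.x)   # if the tiny constraint shifts matter, re-solve with the
                   # original b_ub, warm-started from this solution's basis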

Q7. "What references are there in this field?"

A: What follows is an idiosyncratic list, based on my own preferences, various people's recommendations on the net, and recent announcements of new publications. It is divided into the following categories:

o General reference
o Textbooks
o Presentations of LP modeling systems
o Books containing source code
o Additional books
o Periodicals
o Articles of interest
o See also the listings of online sources of information following.

I have not reviewed everything listed.

General reference

Nemhauser, Rinnooy Kan, & Todd, eds, Optimization: Handbooks in Operations Research and Management Science, Volume 1, North-Holland, 1989. (Very broad-reaching, with large bibliography; a good reference, and worth checking first. Expensive, though, and tough reading for beginners.)

Textbooks

Regarding the common question of the choice of textbook for a college LP course, it's difficult to give a blanket answer because of the variety of topics that can be emphasized: brief overview of algorithms, deeper study of algorithms, theorems and proofs, complexity theory, efficient linear algebra, modeling techniques, solution analysis, and so on. A small and unscientific poll of ORCS-L mailing list readers in 1993 uncovered a consensus that [Chvatal] was in most ways pretty good, at least for an algorithmically oriented class; of course, some new candidate texts have been published in the meantime. For a class in modeling, a book about a commercial code would be useful (LINDO, AMPL, GAMS were suggested), especially if the students are going to use such a code; and many are fond of [Williams], which presents a considerable variety of modeling examples.

Bazaraa, Jarvis and Sherali, Linear Programming and Network Flows. Grad level.

Bertsimas, Dimitris and Tsitsiklis, John, Introduction to Linear Optimization, Athena Scientific, 1997 (ISBN 1-886529-19-1). Graduate-level text on linear programming, network flows, and discrete optimization.

Chvatal, Linear Programming, Freeman, 1983. Undergrad or grad.

Cook, W.J., Cunningham, W.H., Pulleyblank, W.R. and Schrijver, A., Combinatorial Optimization, Wiley Interscience, 1997.

Daellenbach, Hans G., A User's Guide to Linear Programming. Good for engineers. Currently out of print.

Fang, S.-C. & Puthenpura, S., Linear Optimization and Extensions: Theory and Algorithms, Prentice Hall, 1993.

Dantzig, George B., Linear Programming and Extensions, Princeton University Press, 1963. The most widely cited early textbook in the field.

Dantzig, George B. and Thapa, Mukund N., Linear Programming 1: Introduction, Springer Verlag, 1997.

Gass, Saul I., Linear Programming: Methods and Applications, 5th edition, International Thomson Publishing, 1985. The author received the 1997 INFORMS Expository Writing Award.

Ignizio, J.P. & Cavalier, T.M., Linear Programming, Prentice Hall, 1994. Covers usual LP topics, plus interior point, multi-objective and heuristic techniques.

Luenberger, Introduction to Linear and Nonlinear Programming, Addison Wesley, 1984. Updated version of an old standby. The author received the 1999 INFORMS Expository Writing Award.

Murtagh, B., Advanced Linear Programming: Computation and Practice, McGraw-Hill, 1981. Good one after you've read an introductory text. Currently out of print.

Murty, K., Linear and Combinatorial Programming.

Nash, S., and Sofer, A., Linear and Nonlinear Programming, McGraw-Hill, 1996.

Nazareth, J.L., Computer Solution of Linear Programs, Oxford University Press, New York and Oxford, 1987.

Nemhauser, G.L. and Wolsey, L.A., Integer and Combinatorial Optimization, Wiley Interscience, 1988. An advanced text that covers many theoretical and computational topics.

Nering, E.D. & Tucker, A.W., Linear Programs and Related Problems, Academic Press, 1993.

Roos, Terlaky and Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley, Chichester, 1997.

Saigal, R., Linear Programming: A Modern Integrated Analysis, Kluwer Academic Publishers, 1995.

Schrijver, A., Theory of Linear and Integer Programming, Wiley, 1986. Advanced.

Taha, H., Operations Research: An Introduction, 1987.

Thie, P.R., An Introduction to Linear Programming and Game Theory, Wiley, 1988.

Vanderbei, Robert J., Linear Programming: Foundations and Extensions, Kluwer Academic Publishers, 1996. Balanced coverage of simplex and interior-point methods. Source code available on-line for all algorithms presented.

Williams, H.P., Model Building in Mathematical Programming, 3rd edition, Wiley, 1993. Little on algorithms, but excellent for learning what makes a good model.

Wright, Stephen J., Primal-Dual Interior-Point Methods, SIAM Publications, 1997. Covers theoretical, practical and computational aspects of the most important and useful class of interior-point algorithms. The web page for this book contains current information on interior-point codes for linear programming, including links to their websites.

Ye, Yinyu, Interior Point Algorithms: Theory and Analysis, Wiley, 1997.

Presentations of LP modeling systems (usable as texts for some classes)

Bisschop and Roelofs, AIMMS Optimization Modeling and AIMMS User's Guide, Paragon Decision Technology, 1999.

Brooke, Kendrick & Meeraus, GAMS: A Users' Guide, The Scientific Press/Duxbury Press, 1988.

Fourer, Gay & Kernighan, AMPL: A Modeling Language for Mathematical Programming, Duxbury Press, 1992. Comes with Windows "student" version including MINOS 5.5 and CPLEX 6.5.

Greenberg, H.J., Modeling by Object-Driven Linear Elemental Relations: A User's Guide for MODLER, Kluwer Academic Publishers, 1993.

Schrage, L., Optimization Modeling with LINDO, 5th edition, Duxbury Press, 1997.

Books containing source code

Best and Ritter, Linear Programming: active set analysis and computer programs, Prentice-Hall, 1985.

Bertsekas, D.P., Linear Network Optimization: Algorithms and Codes, MIT Press, 1991.

Bunday and Garside, Linear Programming in Pascal, Edward Arnold Publishers, 1987.

Bunday, Linear Programming in Basic (presumably the same publisher).

Burkard and Derigs, Springer Verlag Lecture Notes in Math Systems #184 (the Assignment Problem and others).

Kennington & Helgason, Algorithms for Network Programming, Wiley, 1980. (A special case of LP; contains Fortran source code.)

Lau, H.T., A Numerical Library in C for Scientists and Engineers, CRC Press, 1994. (Contains a section on optimization.)

Martello and Toth, Knapsack Problems: Algorithms and Computer Implementations, Wiley, 1990. (Contains Fortran code, comes with a disk; also covers the Assignment Problem.)

Press, Flannery, Teukolsky & Vetterling, Numerical Recipes, Cambridge, 1986. (Comment: use their LP code with care.)

Syslo, Deo & Kowalik, Discrete Optimization Algorithms with Pascal Programs, Prentice-Hall, 1983. (Contains code for 28 algorithms such as Revised Simplex, MIP, networks.)

Additional books

Ahuja, Magnanti and Orlin, Network Flows, Prentice Hall, 1993.

Beasley, ed., Advances in Linear and Integer Programming, Oxford University Press, 1996. Each chapter is a self-contained essay on one aspect of the subject.

Bertsekas, Network Optimization: Continuous and Discrete Models, Athena Scientific, 1998.

Chandru and Hooker, Optimization Methods for Logical Inference, Wiley-Interscience, 1999.

Edelsbrunner, Algorithms in Combinatorial Geometry, Springer Verlag, 1987.

Forsythe, Malcolm & Moler, Computer Methods for Mathematical Computations, Prentice-Hall.

Gill, Murray and Wright, Numerical Linear Algebra and Optimization, Addison-Wesley, 1991.

Greenberg, A Computer-Assisted Analysis System for Mathematical Programming Models and Solutions: A User's Guide for ANALYZE, Kluwer Academic Publishers, 1993.

Hooker, Logic-Based Methods for Optimization, Wiley-Interscience, 2000.

Hwang and Yoon, Multiple Attribute Decision Making: An Introduction, Sage, 1995.

Lawler, Lenstra, Rinnooy Kan and Shmoys, The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley, 1985.

Moré and Wright, Optimization Software Guide, SIAM Publications, 1993. See also the NEOS Guide to Optimization Software.

Murty, Network Programming, Prentice Hall, 1992.

Papadimitriou and Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Dover, 1998. Also contains a discussion of the complexity of the simplex method.

Reeves, ed., Modern Heuristic Techniques for Combinatorial Problems, Halsted Press (Wiley), 1993. Contains chapters on tabu search, simulated annealing, genetic algorithms, neural nets, and Lagrangian relaxation.

Reinelt, The Traveling Salesman: Computational Solutions for TSP Applications, Springer-Verlag, 1994.

Rockafellar, Network Flows and Monotropic Optimization, Athena Scientific, 1984, reprinted 1998.

Periodicals

Publications of INFORMS, the principal operations research and management science society in the United States:

o OR/MS Today , the bimonthly newsletter of INFORMS, includes case studies, software reviews, and software surveys relevant to linear programming. It also has the most extensive collection of advertisements for commercial linear programming (and other optimization) software packages.

o Interfaces frequently publishes interesting accounts of applications that make use of linear programming.

o INFORMS Journal on Computing includes some articles on the computational aspects of linear programming algorithms.

o Interactive Transactions of OR/MS contains a number of useful surveys and bibliographies relevant to linear programming.

Publications of the Mathematical Programming Society:

o Mathematical Programming contains technical articles on theory and computation.

o Optima, the Society's newsletter, incorporates survey articles and book reviews.

Publications of SIAM, the principal applied mathematics society in the United States:

o SIAM Journal on Optimization contains technical articles, mainly of a mathematical nature.

o SIAG/OPT Newsletter, from the SIAM Activity Group on Optimization.

Publisher-sponsored technical journals focusing on optimization topics:

o Computational Optimization and Applications (and allied Software Forum)
o Constraints
o Journal of Optimization Theory and Applications
o Optimization Methods and Software

Articles of interest

Avis & Fukuda, "A Pivoting Algorithm for Convex Hulls and Vertex Enumeration of Arrangements and Polyhedra", Discrete and Computational Geometry, 8 (1992), 295--313.

Balas, E. and Martin, C., "Pivot And Complement: A Heuristic For 0-1 Programming Problems", Management Science, 1980, Vol 26, pp 86-96.

Balinski, M.L., "An Algorithm for Finding all Vertices of Convex Polyhedral Sets", SIAM J. 9, 1, 1961.

Carpaneto, Dell'amico & Toth, "A Branch-and-bound Algorithm for Large Scale Asymmetric Travelling Salesman Problems," ACM Transactions on Mathematical Software 21 (1995) 410-415.

Haessler, "Selection and Design of Heuristic Procedures for Solving Roll Trim Problems," Management Science 12 (1988) 1460-1471.

Lustig, Marsten and Shanno, "Interior Point Methods for Linear Programming: Computational State of the Art", ORSA Journal on Computing, Vol. 6, No. 1, Winter 1994, pp. 1-14. Followed by commentary articles, and a rejoinder by the authors.

Marsten, Subramanian, Shanno, Saltzman and Lustig, "Interior point methods for linear programming: Just call Newton, Lagrange, and Fiacco and McCormick!" Interfaces, Vol. 20, No. 4 (1990) pp. 105-116.

Mattheis and Rubin, "A Survey and Comparison of Methods for Finding All Vertices of Convex Polyhedral Sets", Mathematics of Operations Research, vol. 5 no. 2 1980, pp. 167-185.

Seidel, "Constructing Higher-Dimensional Convex Hulls at Logarithmic Cost per Face", 1986, 18th ACM STOC, 404--413.

Smale, Stephen, "On the Average Number of Steps in the Simplex Method of Linear Programming", Math Programming 27 (1983), 241-262.

Swart, "Finding the Convex Hull Facet by Facet", Journal of Algorithms, 6 (1985), 17--48.

P.E. Sweeney and E.R. Paternoster, "Cutting and Packing Problems: A Categorized, Application-Oriented Research Bibliography," Journal of the Operational Research Society 43 (1992) 691-706.

Volgenant, A., "Symmetric TSPs," European Journal of Operational Research, 49 (1990) 153-154.

A special issue of Optimization Methods and Software (volumes 11-12, December 1999) is devoted to interior-point methods and includes a supplementary CD with related software.

Q8. "What's available online in this field?"

A: Though traditional publications may remain the best places to learn about optimization theory and algorithms, the Web is the place to go for optimization software. In addition to numerous sources of optimization codes and optimization-related home pages, the Web is increasingly a source of optimization services that let you try out solvers without having to install them on your own equipment.

On-line sources of optimization services

The following websites offer, in some sense, to run your linear or integer programming problem and return a result. Check their home pages for details, which vary considerably. (See also the analogous listing in the NLP FAQ.)

AMPL . A convenient interface supports experimentation with the AMPL modeling language on test problems up to 300 variables and 300 constraints + objectives. Users can request any example from the AMPL book, or provide their own model and data files. There is a choice among 8 solvers for linear and integer programming (as well as nonlinear optimization and complementarity problems).

DecisionNet . Provides access to "a distributed collection of decision technologies," including linear programming, "that are made available for execution over the World Wide Web. These technologies are developed and maintained locally by their providers. DecisionNet contains technology metainformation necessary to guide consumers in search, selection, and execution of these technologies." Facilities for submitting problems in popular modeling language formats are currently being tested.

GIDEN . An interactive graphical environment for a variety of network optimization problems and algorithms. It is written in Java, so you can try it out through any Java-enabled Web browser.

Kestrel . This interface permits varied linear, integer, and nonlinear optimization problems to be sent directly from the AMPL and GAMS modeling systems to the NEOS Server. Results are sent back to AMPL or GAMS, where they may be displayed and analyzed just as if the solving had been done locally.

LPL . Small problems modeled in the LPL language can be submitted by typing or pasting into a web form. Results are returned on a subsequent web page.

MILP by Dmitry V. Golovashkin. Small-scale mixed-integer programs in a simple algebraic format are solved through a web form interface.

Network-Enabled Optimization System (NEOS) Server . Offers access to several dozen solvers for linear and integer programming as well as various nonlinear optimization problems.

o Linear and integer programs are accepted in MPS format or in modeling languages such as AMPL and GAMS.

o Submissions may be made by sending e-mail, by submitting names of local files through a Web form, or via a socket-based submission tool available in Java and Unix tcl/tk versions.

Numerische Mathematik Interaktiv includes teaching tools for various linear and nonlinear optimization methods (in German).

Optimisation Service Provision (OSP) is a prototype for a web-based service that offers varied modeling tools, solvers, and data management facilities. It presents a sophisticated interface and currently offers the only web access to the CPLEX and OSL solvers.

RIOT . The Remote Interactive Optimization Testbed at Berkeley's Industrial Engineering and Operations Research Department, offering online demonstrations of numerous optimization tools and applications.

A more extensive survey appears in Optimization as an Internet Resource by R. Fourer and J.-P. Goux, in the INFORMS journal Interfaces, volume 31, number 2 (2001).

Online sources of optimization software

The Netlib Repository contains a huge collection of mathematical software, papers, and databases, maintained at the University of Tennessee, Knoxville and Oak Ridge National Laboratory. There are also mirror sites in the United States, Norway, the United Kingdom, and other locations. Once you know what you want from Netlib, you may prefer to make requests by anonymous ftp or e-mail; alternative access methods and many other relevant matters are explained in the Netlib FAQ. Many optimization packages are distributed from their own Web sites. Numerous links to these sites are provided elsewhere in this FAQ, especially under the Where is there good software? question.

Online sources of optimization information

Varied general sources on optimization include information on linear and integer programming. They include:

The Optimization Online repository of papers and reports on optimization. Contributions are accepted from anyone working in the field, and are moderated by a team of volunteer coordinators; you can subscribe to a monthly digest that summarizes reports recently received.

The e-Optimization.community, a web meeting place for optimization users and developers, featuring a who's who of people in optimization as well as listings of optimization resources, applications, vendors, case studies, and news.

The NEOS Guide of the Argonne/Northwestern Optimization Technology Center.

The Decision Tree for Optimization Software by Hans Mittelmann and P. Spellucci, and the related Benchmarks for Optimization Software.

Harvey Greenberg's Mathematical Programming Glossary and Bibliography for the Development of An Intelligent Mathematical Programming System.

Listings of optimization software and related information at the Center for Advanced Modeling and Optimization.

More specialized sources include:

Interior-Point Methods Online, containing most new reports in the area of interior-point methods that have appeared since December 1994. Abstracts for all reports are available, as are links to postscript source for most reports; a mailing list carries announcements of new papers, conferences, software, and special journal issues relevant to interior-point methods.

A bibliography of books and papers on Interior-Point methods (including many older ones) compiled by Eberhard Kranich.

A resource website in semidefinite programming maintained by Christoph Helmberg, and a semidefinite programming bibliography by Henry Wolkowicz.

An extensive bibliography for stochastic programming and a specialized one for stochastic integer programming, compiled by Maarten van der Vlerk.

The following sites cover operations research and management science more generally, but have a large amount of content relevant to linear programming and other optimization problems:

INFORMS Online , the website of the Institute for Operations Research and the Management Sciences, including:

o Searchable databases of INFORMS conference presentations going back to 1990, and publications in operations research back to 1982.

o The INFORMS OR/MS Resource Collection (formerly "Michael Trick's Operations Research Page").

WORMS (World-Wide-Web for Operations Research and Management Science) at the Department of Mathematics and Statistics, University of Melbourne, Australia.  

Q9. "Who maintains this FAQ list?"

A: This list is maintained by Robert Fourer ([email protected]) and the Optimization Technology Center of Northwestern University and Argonne National Laboratory.

This article is Copyright © 2000 by Robert Fourer. It may be freely redistributed in its entirety provided that this copyright notice is not removed. It may not be sold for profit or incorporated in commercial documents without the written permission of the copyright holder. Permission is expressly granted for this document to be made available for file transfer from installations offering unrestricted anonymous file transfer on the Internet.

The material in this document does not reflect any official position taken by any organization. While all information in this article is believed to be correct at the time of writing, it is provided "as is" with no warranty implied. If you wish to cite this FAQ formally -- this may seem strange, but it does come up -- you may use:

Robert Fourer ([email protected]), "Linear Programming Frequently Asked Questions," Optimization Technology Center of Northwestern University and Argonne National Laboratory, http://www-unix.mcs.anl.gov/otc/Guide/faq/ linear-programming-faq.html (2000).

Suggestions, corrections, topics you'd like to see covered, and additional material are all solicited. Send them to [email protected].

END linear-programming-faq