
LP (2003) 1

OPERATIONS RESEARCH: 343

1. LINEAR PROGRAMMING

2. INTEGER PROGRAMMING

3. GAMES

Books:

(i) Intro to OR (F. Hillier & J. Lieberman); (ii) OR (H. Taha);

(iii) Intro to Mathematical Prog. (F. Hillier & J. Lieberman); (iv) Intro to OR (J. Eckert & M. Kupferschmid).


LINEAR PROGRAMMING (LP)

LP is an optimal decision making tool in which the objective is a linear function and the constraints on the decision problem are linear equalities and inequalities. It is a very popular decision support tool: in a survey of Fortune 500 firms, 85% of the responding firms said that they had used LP.

Example 1: Manufacturer Produces: A (acid) and C (caustic)

Ingredients used in the production of A & C: X and Y

Each ton of A requires: 2lb of X; 1lb of Y

Each ton of C requires: 1lb of X ; 3lb of Y

Supply of X is limited to: 11lb/week

Supply of Y is limited to: 18lb/week

1 ton of A sells for: £1000

1 ton of C sells for: £1000

The manufacturer wishes to maximize the weekly value of sales of A & C. Market research indicates that no more than 4 tons of acid can be sold each week. How much A & C should be produced to solve this problem? The answer is a pair of numbers:

x1 (weekly production of A), x2 (weekly production of C)

There are many pairs of numbers (x1, x2): (0,0), (1,1), (3,5), .... Not all pairs (x1, x2) are possible

weekly productions (e.g. x1 = 27, x2 = 2 is not possible: (27, 2) is not a feasible set of production figures). The constraints on x1, x2 are such that (x1, x2) represents a possible set of production figures:

The amount of each product produced is non-negative: x1 ≥ 0, x2 ≥ 0

The amount of ingredient X required to produce x1 tons of A & x2 tons of C is 2x1 + x2. As X is limited to 11 lb/week: 2x1 + x2 ≤ 11

The amount of ingredient Y required, combined with the supply restriction: x1 + 3x2 ≤ 18

We cannot sell more than 4 tons of A per week: x1 ≤ 4

A possible set of production figures satisfies these constraints. Conversely, any (x1, x2) satisfying these constraints is a possible set of production figures: see FIGURE 1

THE FEASIBLE REGION is the intersection of the shaded regions (see FIGURE 2). The feasible region OPQRS represents all pairs (x1, x2) that satisfy the constraints. The corners (vertices) O, P, Q, R, S have a special significance [O = (0,0), P = (0,6), Q = (3,5), R = (4,3), S = (4,0)].


Associated with each feasible (x1, x2) is a sales value of £1000 × (x1 + x2). Since we wish to maximize this

amount, our problem is:

Maximize: x1 + x2          ← objective function

Subject to: 2x1 + x2 ≤ 11  ← constraint

x1 + 3x2 ≤ 18              ← constraint

x1 ≤ 4                     ← constraint

x1, x2 ≥ 0                 ← constraint

This is called a LINEAR PROGRAM (LP): a problem of optimizing (maximizing or minimizing) a linear function subject to linear constraints. Linear: no powers, exponentials or product terms.

PROPERTY (*): Observe that the set {O, P, Q, R, S} contains an optimal solution to our LP. Evaluate the objective function x1 + x2 at these points: 0, 6, 8, 7, 4 ⇒ Q = (3,5), i.e. x1 = 3, x2 = 5, is the optimal solution. (see FIGURE 3)

Note: The feasible region (i.e. the area described by the polygon OPQRS) lies entirely within that half of the plane for which x1 + x2 ≤ 8. Since 3 + 5 = 8, no feasible point has a higher objective value than that of Q.

Property (*) holds if we replace x1 + x2 by any linear function c1 x1 + c2 x2, e.g. to minimize 3x1 - x2 over points in the polyhedron OPQRS we take the smallest of {0, -6, 4, 9, 12} and find that P: x1 = 0, x2 = 6 is an optimal solution.

The SIMPLEX ALGORITHM, to be described later, is an efficient method for finding an optimal vertex without necessarily examining all of them.

Property (*) does not imply that points other than vertices cannot be optimal, e.g. if we want to maximize 2x1 + x2 then any point on the segment QR is optimal.
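Small LPs like Example 1 can be checked by computer. The following is a minimal sketch, assuming SciPy is available; `linprog` minimises, so the maximisation objective is negated:

```python
from scipy.optimize import linprog

# Example 1: max x1 + x2  ->  min -x1 - x2
c = [-1, -1]
A_ub = [[2, 1],   # 2x1 +  x2 <= 11  (supply of X)
        [1, 3],   # x1  + 3x2 <= 18  (supply of Y)
        [1, 0]]   # x1        <= 4   (market limit on A)
b_ub = [11, 18, 4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # optimal production plan (vertex Q = (3, 5))
print(-res.fun)   # optimal weekly sales value, in units of £1000
```

The solver lands on the vertex Q = (3, 5) with value 8, agreeing with the enumeration of the corners above.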

2. STANDARD LP FORM

Any LP can be transformed into STANDARD FORM

minimise x0 = c1 x1 + c2 x2 + ... + cn xn
subject to a11 x1 + a12 x2 + ... + a1n xn = b1

a21 x1 + a22 x2 + ... + a2n xn = b2
⋮
am1 x1 + am2 x2 + ... + amn xn = bm

and x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0

bi, cj, aij: fixed real constants; xi, i = 1, ..., n: real numbers, to be determined.

We assume that bi ≥ 0 (each equation may be multiplied by -1 to achieve this).

Compact Notation

minimise x0 = c^T x

subject to A x = b

and x ≥ 0.

Here A = [aij] ∈ ℝ^(m×n), b = (b1, b2, ..., bm)^T, x = (x1, x2, ..., xn)^T, c = (c1, c2, ..., cn)^T, so c^T = [c1, ..., cn].

Example 2: Slack variables

min x0 = c1 x1 + c2 x2 + ... + cn xn
subject to a11 x1 + a12 x2 + ... + a1n xn ≤ b1

a21 x1 + a22 x2 + ... + a2n xn ≤ b2
⋮
am1 x1 + am2 x2 + ... + amn xn ≤ bm

and x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0

becomes

min x0 = c1 x1 + c2 x2 + ... + cn xn
subject to a11 x1 + a12 x2 + ... + a1n xn + y1 = b1

a21 x1 + a22 x2 + ... + a2n xn + y2 = b2
⋮
am1 x1 + am2 x2 + ... + amn xn + ym = bm

and x1 ≥ 0, ..., xn ≥ 0, y1 ≥ 0, y2 ≥ 0, ..., ym ≥ 0.

Total variables: n + m. Slack variables: y1, y2, ..., ym.

The constraint matrix becomes the m × (m + n) matrix [A ⋮ I], and the constraints read [A ⋮ I] (x, y)^T = b.

Example 3: Surplus variables

If the inequalities of Example 2 were reversed so that the typical inequality becomes

ai1 x1 + ai2 x2 + ... + ain xn ≥ bi

⇒ ai1 x1 + ai2 x2 + ... + ain xn - yi = bi;  yi ≥ 0  (yi: surplus variable)

By suitably multiplying by (-1) and adjoining slack and surplus variables, any set of linear inequalities can be converted to standard form if the unknown variables are restricted to be nonnegative.

max 4x1 - 3x2 + 2x3            →   min -4x1 + 3x2 - 2x3

subject to 3x1 + 2x2 - x3 = 4  →   3x1 + 2x2 - x3 = 4
x1 + x2 + 2x3 ≤ 5              →   x1 + x2 + 2x3 + x4 = 5
-x1 + 2x2 - x3 ≥ 2             →   -x1 + 2x2 - x3 - x5 = 2
x1, x2, x3 ≥ 0                 →   x1, x2, x3, x4, x5 ≥ 0

Example 4: Free variables (I)

Suppose x1 ≥ 0 is not present: x1 may take (+) or (-) values. Substitute x1 = v1 - u1; v1, u1 ≥ 0. The problem now has (n+1) variables: v1, u1, x2, ..., xn.

Example 5: Free variables (II)

Page 6: OPERATIONS RESEARCH: 343 1. LINEAR PROGRAMMING 2. INTEGER PROGRAMMING …br/berc/linearprog.pdf · LP (2003) 1 OPERATIONS RESEARCH: 343 1. LINEAR PROGRAMMING 2. INTEGER PROGRAMMING

LP (2003) 5Eliminate x using one of the constraint .1 equations

min x1 + 3x2 + 4x3
subject to x1 + 2x2 + x3 = 5
2x1 + 3x2 + x3 = 6
x2, x3 ≥ 0

As x1 is free, solve for it using the first constraint: x1 = 5 - 2x2 - x3. Substitute this into the objective function and the remaining constraint to get

min { x2 + 3x3 | x2 + x3 = 4, x2, x3 ≥ 0 }

(the substitution leaves an additive constant 5 in the objective, which does not affect the minimizer and is dropped).
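A quick numerical check of the elimination (plain Python; the sample feasible points are made up for illustration):

```python
# For any (x2, x3) feasible in the reduced problem, x1 = 5 - 2*x2 - x3
# recovers a solution of the original constraints, and the original
# objective equals 5 + (x2 + 3*x3).
for x2, x3 in [(0.0, 4.0), (1.0, 3.0), (4.0, 0.0)]:
    assert x2 + x3 == 4.0               # reduced constraint
    x1 = 5 - 2 * x2 - x3                # eliminated free variable (may be < 0)
    assert x1 + 2 * x2 + x3 == 5.0      # first original constraint
    assert 2 * x1 + 3 * x2 + x3 == 6.0  # second original constraint
    assert x1 + 3 * x2 + 4 * x3 == 5 + (x2 + 3 * x3)  # objectives agree
print("free-variable elimination checks out")
```

Note that x1 is allowed to go negative (it is free), e.g. (x2, x3) = (4, 0) gives x1 = -3.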

3. EXAMPLES OF LP PROBLEMS

Example 6: The diet problem

To determine the most economical diet that satisfies the basic nutritional requirements for good health. There are n different foods: the i-th sells at price ci per unit.

There are m basic nutritional ingredients: a healthy diet requires a daily intake of at least bj units of the j-th ingredient for an individual; each unit of food i contains aji units of the j-th ingredient.

xi: number of units of food i in the diet.

minimise total cost x0 = c1 x1 + c2 x2 + ... + cn xn
subject to a11 x1 + a12 x2 + ... + a1n xn ≥ b1
⋮
am1 x1 + am2 x2 + ... + amn xn ≥ bm

and x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0  (nonnegativity of food quantities)

Example 7: The transportation problem

Quantities a1, a2, ..., am of a product are to be shipped from each of m locations and are demanded in amounts b1, b2, ..., bn at each of n destinations.

cij: unit cost of transporting the product from origin i to destination j
xij: the amount to be shipped from i to j (i = 1, ..., m; j = 1, ..., n)

Determine xij to satisfy the shipping requirements and minimise the total cost of transportation.

minimise Σ_{i,j} cij xij

subject to Σ_{j=1}^n xij = ai  (total shipped from i-th origin; i = 1, ..., m)

Σ_{i=1}^m xij = bj  (total required by j-th destination; j = 1, ..., n)

xij ≥ 0;  i = 1, ..., m;  j = 1, ..., n

(For consistency we must also have Σ_{i=1}^m ai = Σ_{j=1}^n bj.)
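The transportation LP above can be set up mechanically. A sketch, assuming SciPy is available, on a tiny made-up instance (m = n = 2, costs favouring the diagonal shipments):

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[1.0, 2.0],
                 [2.0, 1.0]])   # unit costs c[i][j] (made-up data)
a, b = [1.0, 1.0], [1.0, 1.0]   # supplies and demands (consistent: sums match)
m, n = cost.shape

# One equality row per origin (sum_j x_ij = a_i) and per destination
# (sum_i x_ij = b_j); variables x_ij are flattened row-major.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=a + b,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)   # ships on the diagonal, total cost 2
```

The equality system has one redundant row (rank m + n - 1), which the solver's presolve handles.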

4. BASIC SOLUTIONS

To compute a basic solution, consider the system of equalities

A x = b;  x ∈ ℝ^n;  b ∈ ℝ^m;  A ∈ ℝ^(m×n)


Select from the n columns of A a set of m linearly independent columns (such a set exists if rank(A) = m). For simplicity, assume that we select the first m columns of A and denote the m × m matrix determined by these columns by B. B ∈ ℝ^(m×m) is nonsingular and we may uniquely solve

B x_B = b;  x_B ∈ ℝ^m

and set

x = [x_B; 0], i.e. the first m components of x are equal to those of x_B and the rest are equal to zero.

We thus obtain a solution to A x = b.

Definition: Given A x = b, let B be any nonsingular m × m matrix made up of columns of A. If all n - m components of x not associated with the columns of B are set to zero, the solution to the resulting set of equations is said to be a basic solution (BS) to A x = b, w.r.t. the basis B. The components of x associated with the columns of B are basic variables (BV).

B is called a basis, since it consists of m linearly independent columns that can be regarded as a basis for ℝ^m. There may not exist a basic solution to A x = b. To ensure existence we have to assume:

Full rank assumption: The m × n matrix A has m ≤ n and the m rows of A are linearly independent.

Linear dependency among the rows of A leads either to contradictory constraints (there is no solution to A x = b: e.g. x1 + x2 = 1, x1 + x2 = 2) or to a redundancy that can be eliminated (e.g. x1 + x2 = 1, 2x1 + 2x2 = 2).

Under the full rank assumption, A x = b will always have at least one basic solution.
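A basic solution can be computed exactly as described: pick m linearly independent columns, solve B x_B = b, and pad with zeros. A sketch with NumPy, using a 3 × 6 matrix (the A of Example 9 below) and a made-up right-hand side:

```python
import numpy as np

A = np.array([[2, 4, 3, 3, 1, 0],
              [3, 3, 4, 2, 0, 1],
              [1, 2, 1, 2, 0, 0]], dtype=float)
b = np.array([1.0, 2.0, 3.0])    # made-up b for illustration

I = [0, 4, 1]                    # 0-based columns for the text's I = {1, 5, 2}
B = A[:, I]                      # 3x3 basis matrix, must be nonsingular
x_B = np.linalg.solve(B, b)      # solve B x_B = b

x = np.zeros(A.shape[1])
x[I] = x_B                       # non-basic components stay at zero
print(x, np.allclose(A @ x, b))  # a basic solution of A x = b
```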

Basic variables in a basic solution are not necessarily nonzero:

Definition: If one or more BV in a BS has zero value, then the solution is a degenerate BS.

There is an ambiguity in a degenerate BS, since the zero-valued basic and nonbasic variables can be interchanged.

Definition: x satisfying A x = b and x ≥ 0 is said to be feasible. A feasible solution that is also basic is a basic feasible solution (BFS). If this solution is degenerate, it is called a degenerate BFS.

Example 8:

After adding slack variables to the problem of Example 1, we obtain the following equations, which `happen' to form an initial basic representation:

x0 - x1 - x2 = 0   (x0 = c^T x)

2x1 + x2 + x3 = 11
x1 + 3x2 + x4 = 18   (1)
x1 + x5 = 4

In a basic representation the variables (i.e. the elements of the vector x) are divided into basic variables and non-basic variables. In the system of equations given by (1) above,

the basic variables are {x0, x3, x4, x5} and the non-basic variables are {x1, x2}.

Each equation in (1) expresses a particular basic variable as a linear expression in the non-basic variables .

The basic solution of this representation is obtained by setting xj = 0 for each non-basic variable and then solving the equations for the remaining BVs: Set x1 = x2 = 0 ⇒ x0 = 0, x3 = 11, x4 = 18, x5 = 4

⇒ BS: (x0, x1, x2, x3, x4, x5) = (0, 0, 0, 11, 18, 4) = BFS

Looking for a better solution than this, we search for a non-basic variable xj such that increasing xj (from 0) improves x0.

x0 = x1 + x2


⇒ we can increase either x1 or x2 (increasing both is too complicated). Consider the solutions obtained by

increasing x1 to λ and leaving x2 = 0. In order to satisfy (1) and stay feasible we must ensure that

x0 = λ

x3 = 11 - 2λ ≥ 0 ⇒ λ ≤ 11/2

x4 = 18 - λ ≥ 0 ⇒ λ ≤ 18   (2)

x5 = 4 - λ ≥ 0 ⇒ λ ≤ 4

We want the best (largest) λ satisfying (2). As λ takes values between 0 and 4, the solution defined by (2) has x1 = λ, x2 = 0, which corresponds to a point on OS in Figure 2. The solution given by λ = 4,

(x0, x1, x2, x3, x4, x5) = (4, 4, 0, 3, 14, 0)   (point S in fig. 2)

This is also a BFS to (1). BV: {x0, x1, x3, x4} and NBV: {x2, x5}. Note: in a basic solution, the non-basic variables are zero. We need the basic representation, i.e. we need to transform (1) so that x0, x1, x3, x4 are expressed in terms of x2, x5. We do this by pivoting (to be discussed later) to get

x0 - x2 + x5 = 4
x2 + x3 - 2x5 = 3   (3)
3x2 + x4 - x5 = 14
x1 + x5 = 4

A solution to (3) is a solution to (1) and conversely.

From x0 = 4 + x2 - x5 we see that to increase x0, we should increase x2 (and keep x5 = 0).

Set x2 = λ, x5 = 0. To satisfy (3) and stay feasible the other variables must satisfy

x0 = 4 + λ
x3 = 3 - λ ≥ 0 ⇒ λ ≤ 3
x4 = 14 - 3λ ≥ 0 ⇒ λ ≤ 14/3
x1 = 4 ≥ 0

The best value is λ = 3. As λ ∈ [0, 3], the solution has x1 = 4, x2 = λ, which corresponds to a point on SR in Figure 2. The solution given by λ = 3 is:

(x0, x1, x2, x3, x4, x5) = (7, 4, 3, 0, 5, 0)   (point R in fig. 2)

This is also a BFS to (1). BV: {x0, x1, x2, x4}; NBV: {x3, x5}. The basic representation:

x0 + x3 - x5 = 7
x2 + x3 - 2x5 = 3
-3x3 + x4 + 5x5 = 5   (4)
x1 + x5 = 4

A solution to (4) is a solution to (1) and conversely. From x0 = 7 - x3 + x5, to increase x0 we should increase x5. Set x5 = λ, x3 = 0; to satisfy (4) and stay feasible, the other variables must satisfy

x0 = 7 + λ

x2 = 3 + 2λ ≥ 0  (no bound on λ)

x4 = 5 - 5λ ≥ 0 ⇒ λ ≤ 1

x1 = 4 - λ ≥ 0 ⇒ λ ≤ 4

⇒ λ = 1

As λ takes values from 0 to 1, the solution defined above has x1 = 4 - λ, x2 = 3 + 2λ, corresponding to a point on RQ in fig. 2. The solution given by λ = 1:

(x0, x1, x2, x3, x4, x5) = (8, 3, 5, 0, 0, 1)   (point Q in fig. 2)

This is also a BFS to (1). BV: {x0, x1, x2, x5}; NBV: {x3, x4}. Basic representation:

x0 + (2/5)x3 + (1/5)x4 = 8

x2 - (1/5)x3 + (2/5)x4 = 5   (5)

-(3/5)x3 + (1/5)x4 + x5 = 1

x1 + (3/5)x3 - (1/5)x4 = 3

A solution of (5) is a solution of (1), and conversely. Thus, any solution to (1) satisfies

x0 = 8 - (2/5)x3 - (1/5)x4

Any feasible solution has x3, x4 ≥ 0 and hence by (5) x0 ≤ 8. (8, 3, 5, 0, 0, 1) has x0 = 8 and so this solution is maximal.

SUMMARY

1. Among the FS to min { x0 = c^T x | A x = b, x ≥ 0 } there is an important finite subset: the BFS.

2. Each BFS is associated with a basic representation: a set of equations equivalent to min { x0 = c^T x | A x = b, x ≥ 0 } that expresses each BV in terms of the NBVs.

3. By looking at a basic representation we can see if increasing any NBV will improve the objective. If there is one, we can increase it until a new, better, BFS is reached (the usual case). If there does not exist such a NBV, we have the optimal solution.

BASIC FEASIBLE SOLUTIONS

Let S_n = {1, ..., n} and let I ⊆ S_n have m elements. Set of basic variables: {x_i | i ∈ I} ∪ {x0}. Let a_j denote the column of A corresponding to x_j, j ∈ S_n. Associated with I is an m × m matrix B = B(I), where the columns of B are made up from {a_i | i ∈ I}.

Example 9:

If A = [ 2 4 3 3 1 0
        3 3 4 2 0 1
        1 2 1 2 0 0 ]

then I = {1, 5, 2} ⇒ B = [ 2 1 4
                           3 0 3
                           1 0 2 ]

and I = {6, 3, 4} ⇒ B = [ 0 3 3
                          1 4 2
                          0 1 2 ]

The remaining columns a_j for j ∉ I form the matrix N, and so (after shuffling the columns of A) we may assume

A = [B ⋮ N]


We then conformably partition c, x into (c_B, c_N) and (x_B, x_N) respectively, i.e.

c_B = (c_i | i ∈ I);  c_N = (c_j | j ∉ I);  x_B = (x_i | i ∈ I);  x_N = (x_j | j ∉ I)

Example 10: n = 6; I = {5, 3, 2}

c_B = [c5, c3, c2];  c_N = [c1, c4, c6]

x_B = [x5, x3, x2];  x_N = [x1, x4, x6]

Given this partition,

min { x0 = c_B^T x_B + c_N^T x_N | B x_B + N x_N = b; x_B, x_N ≥ 0 }   (6,a)

x0 - c_B^T x_B - c_N^T x_N = 0   (6,b)

B x_B + N x_N = b   (6,c)

As B is assumed to be nonsingular (i.e. B^(-1) exists), a solution to (6,c) satisfies

x_B = B^(-1) b - B^(-1) N x_N   (7,a)

and conversely. Using (7,a) to eliminate x_B from (6,a) yields

x0 = c_B^T B^(-1) b + (c_N^T - c_B^T B^(-1) N) x_N   (7,b)

Note that (7) expresses the BVs (x0, x_B) in terms of the NBVs x_N. The vector

r^T = c_N^T - c_B^T B^(-1) N   (8)

is the relative (or reduced) cost vector (for the NBVs). It is the components of r that determine which vector can be brought into the basis.
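The reduced-cost formula (8) is easy to evaluate numerically. A sketch with NumPy on made-up data (not the notes' Example 11); the same formula applied to a basic column always gives zero:

```python
import numpy as np

A = np.array([[2, 4, 3, 3, 1, 0],
              [3, 3, 4, 2, 0, 1],
              [1, 2, 1, 2, 0, 0]], dtype=float)
c = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])   # made-up cost vector

I = [0, 4, 1]                                  # basic columns (0-based)
J = [j for j in range(A.shape[1]) if j not in I]
B, N = A[:, I], A[:, J]
c_B, c_N = c[I], c[J]

y = np.linalg.solve(B.T, c_B)   # y^T = c_B^T B^(-1), the simplex multipliers
r = c_N - N.T @ y               # reduced cost of each non-basic column
print(dict(zip(J, r)))
```

Solving B^T y = c_B instead of forming B^(-1) explicitly is the standard numerically preferable route.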

Example 11:

c = (6, 7, 4, 2, -3, 4, 0, 0)^T;  A = [ 2 1 3 2 3 2 1 0
                                        3 4 2 2 3 0 0 1 ];  b = (4, 2)^T

I = {4, 3} ⇒ B = [ 2 3 ];  det B = -2 ≠ 0;  B^(-1) = [ -1  3/2 ]
                 [ 2 2 ]                             [  1  -1  ]

(7,a) ⇒ x4 + (5/2)x1 + 5x2 + (3/2)x5 - 2x6 - x7 + (3/2)x8 = -1

x3 - x1 - 3x2 + 2x6 + x7 - x8 = 2

(7,b) ⇒ x0 - 5x1 - 9x2 + 6x5 + 2x7 - x8 = 6

The Importance of BFS

It is necessary only to consider BFSs when seeking an optimal solution to an LP, because the optimal value is always achieved at such a solution.


Definition: Given an LP in standard form, a feasible solution to the constraints {A x = b; x ≥ 0} that achieves the minimum value of the objective function subject to those constraints is said to be an optimal feasible solution. If this solution is basic then it is an optimal BFS.

Theorem 1: Fundamental theorem of LP

Given an LP in standard form where A is an m × n matrix of rank m:
(i) if there is a feasible solution, then there is a BFS (see Figure 4);
(ii) if there is an optimal solution, then there is an optimal BFS (see Figure 5).

Theorem 1 reduces the task of solving an LP to that of searching over BFSs. Since for a problem having n variables and m constraints there are at most

(n choose m) = n! / (m! (n - m)!)

basic solutions (corresponding to the number of ways of selecting m of the n columns), there are only a finite number of possibilities. Thus Theorem 1 yields an obvious but terribly inefficient way of computing the optimum through a finite search technique.

Example 12: (n choose m) for a small problem

m = 30, n = 100:  (100 choose 30) = 100! / (30! 70!) ≈ 2.9 × 10^25.

Checking 10^6 sets I per second, this search would take roughly 10^12 years.
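The count in Example 12 can be reproduced with Python's math.comb (the 10^6-checks-per-second rate is the example's own assumption):

```python
import math

# Example 12's count of candidate basis sets: C(100, 30).
count = math.comb(100, 30)
print(f"{count:.1e}")                 # about 2.9e+25

# At 10**6 index sets checked per second:
seconds = count / 10**6
print(seconds / (365.25 * 24 * 3600), "years")   # roughly 1e12 years
```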

The set of basic variables: {x_i | i ∈ I} ∪ {x0}, where I ⊆ S_n has m elements.

The set of non-basic variables: {x_j | j ∉ I and j ≠ 0}.

The BS corresponding to I is given by: (i) x_j = 0 for j ∉ I, j ≠ 0 (⇒ x_N = 0 in (7,a));
(ii) x_B = B^(-1) b - B^(-1) N x_N = B^(-1) b.

This is a feasible solution iff B^(-1) b ≥ 0, in which case it is a BFS.

Example 13: A, b, c given as in Example 11, I = {4, 3}. BS: (x0, x1, ..., x8) = (6, 0, 0, 2, -1, 0, 0, 0, 0) is not feasible (x4 = -1 < 0).

Exercise: Find some BFS for this example.

Note: The number of distinct BFSs to (1) is ≤ (n choose m) = the number of sets I ⊆ S_n with |I| = m.

⇒ This number is finite. This number is usually < (n choose m) because, for a given I, (i) B(I) may be singular, (ii) the basic solution may not be nonnegative. Also it is possible that two (or more) distinct I1, I2 can lead to the same BFS:

Example 14:

x1 + x2 + 2x3 = 1
2x1 + x2 + x3 = 2

I1 = {1, 2},  I2 = {1, 3}

Both I1 and I2 lead to the same BFS: (1, 0, 0).

Example 15: To demonstrate that we cannot simply state "an LP has an optimal BFS":

Infeasible (FR = ∅): min { x0 = 2x1 + x2 | x1 + x2 ≤ 1; x1 + x2 ≥ 2; x1, x2 ≥ 0 }

Unbounded: max { x0 = x1 | x1 - x2 = -1; x1, x2 ≥ 0 }

We can make x1 arbitrarily large, i.e. there is no maximum value of x0 and so no optimal solution.

Note: The problem min x0 above has a solution (x1 = 0, x2 = 1), and so `unbounded' refers to the objective value and not to the `size' of the FR (which is unbounded).

5. THE SIMPLEX ALGORITHM


Convention (indexing the rows by BVs): A row is indexed by the basic variable in that row.

The simplex algorithm is based on the fact that if a BFS x is not optimal, then there is some neighbouring basic solution y with a better objective value. If we examine the sequence of BFSs found when solving Example 8 we see that the sequence of index sets I (and x0) of basic variables was

{0, 3, 4, 5} → {0, 3, 4, 1} → {0, 2, 4, 1} → {0, 2, 5, 1}

I1            I2            I3            I4

Notice that for t = 1, 2, 3, ..., I_{t+1} is obtained from I_t by removing one element and replacing it by a new element, i.e. |I_{t+1} \ I_t| = |I_t \ I_{t+1}| = 1.

If I, I′ are such that B(I), B(I′) are non-singular, then I, I′ are said to be neighbours if |I \ I′| = |I′ \ I| = 1.

Pivots

At each stage of the simplex algorithm we have a basic representation and its BFS. We then use the reduced costs to see if there is some neighbouring representation that has a better solution. Should such a neighbour exist, we construct this representation by pivoting.

Consider the system A x = b (A ∈ ℝ^(m×n), m ≤ n)

a11 x1 + a12 x2 + ... + a1n xn = b1
⋮   (9)
am1 x1 + am2 x2 + ... + amn xn = bm

If the equations (9) are linearly independent, we may replace a given equation by a nonzero multiple of itself plus any linear combination of the other equations. This leads to the well known Gaussian reduction schemes, whereby multiples of equations are systematically subtracted from one another to yield a canonical form. If the first m columns of A are linearly independent, the system (9) can, by a sequence of such multiplications and subtractions, be converted to the following canonical form:

x1 + y_{1,m+1} x_{m+1} + y_{1,m+2} x_{m+2} + ... + y_{1,n} xn = y10

x2 + y_{2,m+1} x_{m+1} + y_{2,m+2} x_{m+2} + ... + y_{2,n} xn = y20   (10)

⋮
xm + y_{m,m+1} x_{m+1} + y_{m,m+2} x_{m+2} + ... + y_{m,n} xn = ym0

Example 16:

x1 + 3x2 + x3 = 1
x2 + 3x3 = 3

eq. 1 - 3 × eq. 2:

x1 - 8x3 = -8
x2 + 3x3 = 3

According to this canonical representation:

Basic variables: x1 = y10, x2 = y20, ..., xm = ym0. Non-basic variables: x_{m+1} = 0, x_{m+2} = 0, ..., xn = 0.

We relax our definition and consider a system to be in canonical form if, among the n variables, there are m basic ones with the property that each appears in only one equation, its coefficient in that equation is unity, and no two of these m variables occur in any one equation. This is equivalent to saying that a system is in canonical form if, by some reordering of the equations and variables, it takes the form (10). (10) is also represented by its corresponding coefficients, or tableau:

1  0 ... 0   y_{1,m+1}  y_{1,m+2} ... y_{1,n}   y10
0  1 ... 0   y_{2,m+1}  y_{2,m+2} ... y_{2,n}   y20
⋮
0  0 ... 1   y_{m,m+1}  y_{m,m+2} ... y_{m,n}   ym0

The question solved by pivoting is this: given a system in canonical form, suppose a non-basic variable is to be made basic and a basic variable is to be made nonbasic. What is the new canonical form corresponding to the new set of basic variables? The procedure is quite simple. Suppose in (10) we wish to replace the basic variable x_p, 1 ≤ p ≤ m, by the nonbasic variable x_q. This can be done iff y_pq ≠ 0 in (10). It is accomplished by dividing row p by y_pq to get a unit coefficient for x_q in the p-th equation, and then subtracting suitable multiples of row p from each of the other rows in order to get a zero coefficient for x_q in all other equations. This transforms the q-th column of the tableau so that it is zero except for its p-th entry, which is 1, and does not affect the columns of the other basic variables. Denoting the coefficients of the new canonical form by y′_ij:

y′_ij = y_ij - (y_iq / y_pq) y_pj for i ≠ p,  and  y′_pj = y_pj / y_pq,   j = 0, ..., n   (11)

(11) are the pivot equations of LP. y_pq is the pivot element.

Example 17:

x1 + x4 + x5 - x6 = 5
x2 + 2x4 - 3x5 + x6 = 3
x3 - x4 + 2x5 - x6 = -1

Find the basic solution with basic variables x4, x5, x6.

  x1    x2    x3    x4   x5    x6
------------------------------------------------
   1     0     0     1    1    -1   |    5
   0     1     0     2   -3     1   |    3
   0     0     1    -1    2    -1   |   -1
------------------------------------------------
   1     0     0     1    1    -1   |    5
  -2     1     0     0   -5     3   |   -7   (replacing x2 by x5 as BV)
   1     0     1     0    3    -2   |    4
------------------------------------------------
  3/5   1/5    0     1    0   -2/5  |  18/5
  2/5  -1/5    0     0    1   -3/5  |   7/5
 -1/5   3/5    1     0    0   -1/5  |  -1/5
------------------------------------------------
   1    -1    -2     1    0     0   |    4   (New basic solution:
   1    -2    -3     0    1     0   |    2    x4 = 4, x5 = 2, x6 = 1)
   1    -3    -5     0    0     1   |    1
------------------------------------------------
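The pivot equations (11) are a few lines of code. A sketch (plain Python with exact fractions) that replays Example 17's three pivots and recovers the basic solution x4 = 4, x5 = 2, x6 = 1:

```python
from fractions import Fraction

def pivot(T, p, q):
    """Make variable q basic in row p, per the pivot equations (11):
    divide row p by the pivot element T[p][q], then subtract suitable
    multiples of the new row p from every other row."""
    piv = T[p][q]
    T[p] = [v / piv for v in T[p]]
    for i in range(len(T)):
        if i != p and T[i][q]:
            T[i] = [v - T[i][q] * w for v, w in zip(T[i], T[p])]

# Example 17's tableau: columns x1..x6 plus the right-hand side.
T = [[Fraction(v) for v in row] for row in
     [[1, 0, 0, 1, 1, -1, 5],
      [0, 1, 0, 2, -3, 1, 3],
      [0, 0, 1, -1, 2, -1, -1]]]

# Make x4, x5, x6 basic (0-based columns 3, 4, 5), one pivot per row.
for row, col in [(0, 3), (1, 4), (2, 5)]:
    pivot(T, row, col)

print([row[-1] for row in T])   # basic solution: x4 = 4, x5 = 2, x6 = 1
```

Exact `Fraction` arithmetic reproduces the intermediate tableaux (18/5, 7/5, -1/5, ...) without rounding.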

Example 18:

Using Example 11, I = {4, 3}:

x0 - 5x1 - 9x2 + 6x5 + 2x7 - x8 = 6

x4 + (5/2)x1 + 5x2 + (3/2)x5 - 2x6 - x7 + (3/2)x8 = -1

x3 - x1 - 3x2 + 2x6 + x7 - x8 = 2

and pivoting on (4,6) yields

x0 - 5x1 - 9x2 + 6x5 + 2x7 - x8 = 6

-(5/4)x1 - (5/2)x2 - (1/2)x4 - (3/4)x5 + x6 + (1/2)x7 - (3/4)x8 = 1/2

(3/2)x1 + 2x2 + x3 + x4 + (3/2)x5 + (1/2)x8 = 1

which is the basic representation for I - {4} + {6} = {6, 3}.

The simplex algorithm starts with a BFS and a basic representation and proceeds by a sequence of pivots to find a BFS which is also optimal. For most problems, finding an initial BFS is not easy and this is discussed later. However, for those problems in which the constraints are

Σ_{j=1}^n aij xj ≤ bi, i = 1, ..., m;  xj ≥ 0, j = 1, ..., n

and where bi ≥ 0, i = 1, ..., m, it is straightforward. On adding slack variables x_{n+1}, ..., x_{n+m}, we find that

x0 - Σ_{j=1}^n cj xj = 0;  x_{n+i} + Σ_{j=1}^n aij xj = bi, i = 1, ..., m

is itself a basic representation with I = {n+1, ..., n+m}; xj = 0, j = 1, ..., n (non-basic), x_{n+i} = bi, i = 1, ..., m (basic) is feasible as long as bi ≥ 0.

We can now develop the simplex algorithm. Assume that we have some basic representation (7,a)-(7,b). Notice that, because of the equivalence, the original problem (i.e. minimise x0 = c^T x, subject to A x = b; x ≥ 0) and (7,a)-(7,b) have the same set of solutions.

The goal of the simplex algorithm is to produce a basic representation whose basic solution is optimal. This is done by satisfying the conditions of Theorem 2.

Theorem 2 (optimality)

If r^T = c_N^T - c_B^T B^(-1) N ≥ 0 (see (7,b)), then the associated basic feasible solution minimizes x0.

Proof

For the given basic solution (assumed feasible), x_N = 0 and so

x0 = c_B^T B^(-1) b + (c_N^T - c_B^T B^(-1) N) x_N = c_B^T B^(-1) b

For any other solution to (7,a)-(7,b) we have

x0′ = c_B^T B^(-1) b + r^T x_N

r^T x_N = r_{m+1} x_{m+1} + r_{m+2} x_{m+2} + ... + r_n x_n ≥ 0

since r ≥ 0 and x_N ≥ 0. It then follows that

x0′ = c_B^T B^(-1) b + r^T x_N ≥ c_B^T B^(-1) b = x0.

Suppose now that our basic representation does not satisfy the conditions of Theorem 2. In the simplex algorithm we try to choose a pivot so that the new basic representation is (a) feasible and (b) x0′ < x0 (unfortunately, because of degeneracy, we can only guarantee x0′ ≤ x0; more on this later).

The motivation for the pivot choice is given in two ways. We assume that one of the elements of r is negative. At this stage we need to introduce an alternative representation using

x0 - (c_N^T - c_B^T B^(-1) N) x_N = c_B^T B^(-1) b

written as

x0 + β^T x = β0  (= c_B^T B^(-1) b)

or

x0 + Σ_{i=1}^n βi xi = β0

where βi = 0 for i ∈ I, and βi = the corresponding element of (-r) for i ∈ S_n - I (non-basic variables). If the basic variables are x1, x2, ..., xm, then β1 = β2 = ... = βm = 0, and corresponding to the non-basic variables x_{m+1}, ..., xn we have β_{m+1} = -r_{m+1}, β_{m+2} = -r_{m+2}, ..., βn = -rn.

Thus, assume that for some k ∈ S_n - I we have βk > 0. We can first look at the current BFS and examine how increasing xk will lead to a better solution.

Example 19:

x0 + 6x4 + 5x5 + x6 = 26

x1 + 2x4 + 2x5 + x6 = 7

x2 - 3x4 + 3x5 + 3x6 = 5

x3 + 3x4 + x5 + x6 = 6

Now β4 = 6 > 0. We consider increasing x4 while keeping x5 = x6 = 0. The values of the basic variables will become

x0 = 26 - 6x4   (the larger x4, the smaller x0)

x1 = 7 - 2x4
x2 = 5 + 3x4    (must ensure x1, x2, x3 ≥ 0)

x3 = 6 - 3x4

The larger x4, the smaller x0 will become. We must, however, ensure that x1, x2, x3 remain non-negative. We can see that x2 actually increases and remains non-negative. However, if x4 > 7/2, x1 will become negative. Thus, the best feasible solution is given by

x4 = min {7/2, 6/3} = 2, or

(x0, x1, x2, x3, x4, x5, x6) = (14, 3, 11, 0, 2, 0, 0)

Of key importance is the fact that this latter solution is also a basic solution. It is the one associated with the new basic representation obtained by a pivot on the circled 3 (the coefficient of x4 in the x3 row).

Returning to the general case with βk > 0, if we put xk = λ ≥ 0 and xj = 0 for j ∉ I ∪ {k}, then the value of each basic variable xi must satisfy

xi = y_i0 - y_ik λ  for i ∈ I ∪ {0}

in order that (7,a,b) still hold.

Now we have assumed that βk > 0 and so x0 decreases monotonically as λ increases. We thus increase xk as much as possible while ensuring that all variables except x0 remain nonnegative in value. The


variables which are currently non-basic, other than xk, remain at zero. So we have only to consider variables which are currently basic.

Rules (12)

If y_ik ≤ 0 then λ is unrestricted for that equation:
xi = y_i0 - y_ik λ ≥ y_i0 ≥ 0  ∀ λ ≥ 0  (here i ≠ 0)
⇒ xi ≥ 0 no matter how large λ becomes.
(Example: x2 = 5 + 3x4 ≥ 0 ∀ x4 ≥ 0.)

If y_ik > 0 then y_i0 - y_ik λ ≥ 0 ⇔ λ ≤ y_i0 / y_ik
⇒ to ensure that all variables remain non-negative we need only ensure

λ ≤ y_i0 / y_ik  ∀ i ∈ I such that y_ik > 0.

As a consequence, we can show:

Theorem 3

If for some basic feasible representation βk > 0 and y_ik ≤ 0 for all i ∈ I, then the problem is unbounded (below), i.e. there is no minimum value for x0.

Proof

From (12) we see that we can make λ arbitrarily large and still have a feasible solution. The objective value for this solution is x0 = β0 - βk λ → -∞.

If ∃ i ∈ I such that y_ik > 0, then the best solution is obtained by making λ as large as possible, i.e.

λ = min{ y_i0 / y_ik | y_ik > 0, i ∈ I } (the ratio test).

If θ denotes this value of λ, then the solution obtained is

(i) x_k = θ; (ii) x_i = y_i0 − θ y_ik, ∀ i ∈ I − {0}; (iii) x_j = 0, ∀ j ∉ I ∪ {k} (13)

Suppose that θ = y_j0 / y_jk. Then we see from the pivot formulae (11) that the solution given in (13) is the basic solution obtained after pivoting on (j, k).
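The ratio test can be sketched directly. The helper below is illustrative (its name and data layout are not from the notes) and uses exact rational arithmetic:

```python
from fractions import Fraction

def ratio_test(y_col, rhs):
    """Return (theta, leaving_row) for an entering column against the RHS.

    y_col[i] = y_ik and rhs[i] = y_i0 for the current basic rows i.
    Returns None if y_ik <= 0 in every row: by rule (12) lambda is then
    unrestricted, i.e. the problem is unbounded below when beta_k > 0."""
    candidates = [(Fraction(rhs[i]) / y_col[i], i)
                  for i in range(len(y_col)) if y_col[i] > 0]
    return min(candidates) if candidates else None

# Ratios 7/2 and 6/3: theta = 2, attained in the second row.
theta, row = ratio_test([2, 3], [7, 6])
assert (theta, row) == (Fraction(2), 1)
assert ratio_test([-1, 0], [5, 2]) is None   # unbounded case
```

The smallest ratio wins because it is the largest λ that keeps every basic variable non-negative.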

The pivot choice (j, k) above has the following characteristics:

β_k > 0 (14, a)

y_j0 / y_jk = min{ y_i0 / y_ik | y_ik > 0 } (14, b)

This choice of pivot can also be justified from the pivot formulae. Assume now that our current BFS is non-degenerate (i.e. y_i0 > 0, i ∈ I). We seek a pivot that produces a new BFS which is

FEASIBLE and β′_0 < β_0 (15)

Theorem 4

Assuming non-degeneracy, (15) holds iff (14) holds.

Proof

For feasibility we must have

(i) y′_j0 = y_j0 / y_jk ≥ 0

Since y_j0 > 0 (from the BFS), (i) is true iff y_jk > 0.

(ii) y′_i0 = y_i0 − y_ik ( y_j0 / y_jk ) ≥ 0

and (ii) holds trivially if y_ik ≤ 0, as then

y′_i0 ≥ y_i0 > 0.

For y_ik > 0, (ii) holds iff

y_i0 − y_ik ( y_j0 / y_jk ) ≥ 0 ⇒ y_j0 / y_jk ≤ y_i0 / y_ik

which justifies (14, b).

To obtain β′_0 < β_0, where

β′_0 = β_0 − β_k ( y_j0 / y_jk )

(this is an application of the pivot equation to the equation x_0 + β^T x = β_0; specifically, we are evaluating the new value of β_0), we must have

β_k ( y_j0 / y_jk ) > 0.

Since y_j0, y_jk > 0, this is possible if and only if (14, a) holds.

We have so far only considered minimization problems. A maximization problem can be dealt with by noting that

max x_0 = − min ( −x_0 )

or by searching for reduced costs of the opposite sign when selecting the entering variable in the simplex method.

In general there will be more than one non-basic variable with β_k > 0. One reasonable policy used to choose k is

β_k = max_j { β_j }

i.e. choose the variable which produces the greatest decrease in x_0 per unit increase in the variable.

The Simplex Algorithm (for minimization problems)

Step 0: Find an initial basic feasible solution and construct its basic representation.

Step 1: If β_k ≤ 0 for all k ∉ I, stop: the current basis is optimal. Else

Step 2: If ∃ k such that β_k > 0 and y_ik ≤ 0 for all i ∈ I, stop: there is no finite minimum. Else

Step 3: Choose x_k such that β_k > 0 (entry criterion: x_k enters the basis).

Step 4: Let y_j0 / y_jk = min_{i ∈ I} { y_i0 / y_ik | y_ik > 0 } (exit criterion: x_j leaves the basis).

Step 5: Pivot on y_jk and go to Step 1.

The procedure Steps 1-5 define what is called a Simplex iteration.
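Steps 0-5 above can be sketched as a small tableau routine. This is a minimal illustration under the sign convention of these notes (row 0 stores x_0 + Σ β_j x_j = β_0 and the entering variable has β_k > 0); the test data is a three-constraint instance (the data of Example 20 as decoded here, an assumption), and all names are illustrative:

```python
from fractions import Fraction

def simplex(T, basis):
    """Run simplex iterations on an extended tableau of Fractions.

    Row 0 holds [beta_1 .. beta_n | beta_0]; row i >= 1 holds the equation
    of basic variable basis[i-1] (0-based variable indices).  Entering
    rule: largest beta_k > 0 (Step 3); exit rule: ratio test (Step 4).
    Returns 'optimal' or 'unbounded'; T and basis are updated in place."""
    m, n = len(T) - 1, len(T[0]) - 1
    while True:
        k = max(range(n), key=lambda j: T[0][j])           # Step 3
        if T[0][k] <= 0:
            return "optimal"                               # Step 1
        rows = [i for i in range(1, m + 1) if T[i][k] > 0]
        if not rows:
            return "unbounded"                             # Step 2
        r = min(rows, key=lambda i: T[i][-1] / T[i][k])    # Step 4
        basis[r - 1] = k
        piv = T[r][k]                                      # Step 5: pivot
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][k] != 0:
                f = T[i][k]
                T[i] = [a - f * b for a, b in zip(T[i], T[r])]

F = Fraction
# min x0 = -4x1 - 2x2 - x3 with three <= constraints, slacks x4..x6:
T = [[F(4), F(2), F(1), F(0), F(0), F(0), F(0)],   # beta row, beta_0 = 0
     [F(1), F(1), F(1), F(1), F(0), F(0), F(4)],
     [F(1), F(-1), F(2), F(0), F(1), F(0), F(3)],
     [F(3), F(2), F(-1), F(0), F(0), F(1), F(12)]]
basis = [3, 4, 5]                                  # all-slack starting basis
assert simplex(T, basis) == "optimal"
assert T[0][-1] == -15                             # minimum value of x0
```

Exact Fractions are used so that the asserted optimum is reproduced without rounding error; a production code would instead work with a factorization of the basis matrix.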

Iterations can be effectively carried out using an (Extended) Simplex tableau.


Basic
Variables | x_1 ... x_i ... x_j ... x_n | R.H.S.
___________________________________________________________________
x_0       | 0   ... 0   ... β_j ... β_n | β_0
⋮         | ⋮       ⋮       ⋮       ⋮   | ⋮
x_i       | 0   ... 1   ... y_ij ... y_in | y_i0
⋮         | ⋮       ⋮       ⋮       ⋮   | ⋮

Example 20:

Minimise x_0 = −4x_1 − 2x_2 − x_3

Subject to x_1 + x_2 + x_3 ≤ 4
x_1 − x_2 + 2x_3 ≤ 3
3x_1 + 2x_2 − x_3 ≤ 12
x_1, x_2, x_3 ≥ 0

Adding slack variables x_4, x_5, x_6, the constraints become

x_1 + x_2 + x_3 + x_4 = 4
x_1 − x_2 + 2x_3 + x_5 = 3
3x_1 + 2x_2 − x_3 + x_6 = 12
x_1, ..., x_6 ≥ 0

We thus have an initial (all slack) basic feasible solution with basic variables x_4, x_5, x_6.

Basic Variables | x_1   x_2   x_3   x_4   x_5   x_6 |  RHS
___________________________________________________________________
x_0 |  4     2     1     0     0     0 |   0
x_4 |  1     1     1     1     0     0 |   4
x_5 |  1*   -1     2     0     1     0 |   3
x_6 |  3     2    -1     0     0     1 |  12
___________________________________________________________________
x_0 |  0     6    -7     0    -4     0 | -12
x_4 |  0     2*   -1     1    -1     0 |   1
x_1 |  1    -1     2     0     1     0 |   3
x_6 |  0     5    -7     0    -3     1 |   3
___________________________________________________________________
x_0 |  0     0    -4    -3    -1     0 | -15
x_2 |  0     1   -1/2   1/2  -1/2    0 |  1/2
x_1 |  1     0    3/2   1/2   1/2    0 |  7/2
x_6 |  0     0   -9/2  -5/2  -1/2    1 |  1/2

(* marks the pivot element; the final tableau is optimal with minimum value x_0 = −15.)

Finiteness of the Algorithm:

Theorem 5

If all basic solutions are non-degenerate then the simplex algorithm described above must terminate after a finite number of steps, either with an optimal solution or with proof that no finite optimum exists.

Proof

Since no basis is degenerate, y_j0 > 0 at each step and hence β′_0 < β_0 at each step (remember Theorem 4), i.e. the sequence of objective values obtained by the algorithm is strictly monotonically decreasing (β″_0 < β′_0 < β_0). Therefore no basic solution can be repeated.

Since there are a finite number of basic solutions, the process cannot continue indefinitely and so must terminate at Step 1 or Step 2 after a finite number of iterations.

Theorem 6

In the absence of degeneracy, a necessary condition for a basis to be minimal is that β_j ≤ 0 for all j.

Proof

(Same as Theorem 2.) If β_k > 0 for some k, then either there is no finite minimum or, by pivoting on y_jk as defined in Step 4, we can strictly reduce the value of the objective function.

6. DEGENERACY

We have discussed the simplex algorithm under assumptions of non-degeneracy. We say that a basic solution is degenerate if it has more than n − m zero-valued components.

Lemma

A basic solution x to the LP problem is associated with more than one index set iff it is degenerate (under a 'mild' assumption).

Proof

Suppose first that x can be obtained from I_1 and I_2 where I_1 ≠ I_2. Then x_j = 0 for j ∈ (S_n − I_1) ∪ (S_n − I_2) = S_n − (I_1 ∩ I_2). As I_1 ≠ I_2, |I_1 ∩ I_2| < m and so x is degenerate. (Note, for example, that this means y_i0 = 0 for i ∈ I_1 − I_2 using the representation defined by I_1.)

Suppose now that x is a degenerate basic solution. Let I be an index set which produces x, and suppose that y_j0 = 0 for some j ∈ I.

'Mild' assumption: for each j ∈ S_n there is a solution to LP with x_j ≠ 0.

Under this assumption, ∃ k ∉ I such that y_jk ≠ 0, since otherwise the equation x_j = 0 would be part of the representation. If we pivot on (j, k), the new basic solution is identical to the current one (substitute y_j0 = 0 into (11) with λ = 0). Thus x can be produced from the index sets I and (I − {j}) ∪ {k}.

How does degeneracy affect the simplex algorithm?

We have seen that if the pivot (j, k) satisfies y_j0 = 0 then the new basic solution obtained is identical to the old one. In particular, β′_0 = β_0 and the proof of the Theorem (finite termination) breaks down.

Let us call a pivot (j, k) degenerate if y_j0 = 0 and non-degenerate otherwise. An instance of the simplex algorithm can now be decomposed into:

[sequence of non-degenerate pivots] [degenerate pivots] [sequence of non-degenerate pivots] [degenerate pivots] ...

Note that some or all of these sequences of degenerate pivots may be empty.

Geometrically speaking, the current BFS remains unchanged throughout a sequence of degenerate pivots, and then a non-degenerate pivot 'moves us' to a neighbouring BFS.

We know that the number of non-degenerate pivots is ≤ (n choose m). However, suppose that I_1, I_2, ..., I_k, ... denotes the sequence of basic index sets produced during some sequence of degenerate pivots, and suppose that I_k = I_j for some j < k. Then, assuming that a given index set determines a unique pivot, we will have

I_{j+1} = I_{k+1}, I_{j+2} = I_{k+2}, ...

and so the algorithm will cycle and never terminate.

Example 21:

min x_0 = −(3/4)x_4 + 20x_5 − (1/2)x_6 + 6x_7

subject to x_1 + (1/4)x_4 − 8x_5 − x_6 + 9x_7 = 0
x_2 + (1/2)x_4 − 12x_5 − (1/2)x_6 + 3x_7 = 0
x_3 + x_6 = 1

We have the following sequence of tableaus, choosing β_k = max_j { β_j } and choosing the first j that minimizes the ratio y_i0 / y_ik for y_ik > 0.

BV  | x_1   x_2   x_3   x_4    x_5    x_6    x_7  | RHS
___________________________________________________________________
x_0 |  0     0     0    3/4    -20    1/2    -6   |  0
x_1 |  1     0     0    1/4*    -8     -1     9   |  0
x_2 |  0     1     0    1/2    -12   -1/2     3   |  0
x_3 |  0     0     1     0       0      1     0   |  1
___________________________________________________________________
x_0 | -3     0     0     0       4    7/2   -33   |  0
x_4 |  4     0     0     1     -32     -4    36   |  0
x_2 | -2     1     0     0       4*   3/2   -15   |  0
x_3 |  0     0     1     0       0      1     0   |  1
___________________________________________________________________
x_0 | -1    -1     0     0       0      2   -18   |  0
x_4 | -12    8     0     1       0      8*  -84   |  0
x_5 | -1/2  1/4    0     0       1    3/8  -15/4  |  0
x_3 |  0     0     1     0       0      1     0   |  1
___________________________________________________________________
x_0 |  2    -3     0   -1/4      0      0     3   |  0
x_6 | -3/2   1     0    1/8      0      1  -21/2  |  0
x_5 | 1/16 -1/8    0   -3/64     1      0   3/16* |  0
x_3 |  3/2  -1     1   -1/8      0      0   21/2  |  1
___________________________________________________________________
x_0 |  1    -1     0    1/2    -16      0     0   |  0
x_6 |  2*   -6     0   -5/2     56      1     0   |  0
x_7 |  1/3 -2/3    0   -1/4    16/3     0     1   |  0
x_3 | -2     6     1    5/2    -56      0     0   |  1
___________________________________________________________________
x_0 |  0     2     0    7/4    -44   -1/2     0   |  0
x_1 |  1    -3     0   -5/4     28    1/2     0   |  0
x_7 |  0    1/3*   0    1/6     -4   -1/6     1   |  0
x_3 |  0     0     1     0       0      1     0   |  1
___________________________________________________________________
x_0 |  0     0     0    3/4    -20    1/2    -6   |  0
x_1 |  1     0     0    1/4     -8     -1     9   |  0
x_2 |  0     1     0    1/2    -12   -1/2     3   |  0
x_3 |  0     0     1     0       0      1     0   |  1

(* marks the pivot element; every pivot is degenerate, and the seventh tableau repeats the first, so the algorithm cycles.)

We can avoid the possibility of cycling by tightening the pivot choice rule. There are several possibilities. We give one of the simplest, prove its validity and then discuss whether in practice any such rule is necessary!

Bland's Rule

(i) Pivot column choice: k = min{ j | β_j > 0 }
(ii) Pivot row choice: let θ = min{ y_i0 / y_ik | y_ik > 0 }; choose j = min{ i | y_i0 / y_ik = θ and y_ik > 0 }
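The two choices in Bland's rule can be sketched as follows (0-based indices; the function names are illustrative, not from the notes):

```python
from fractions import Fraction

def bland_entering(beta):
    """(i): smallest index j with beta_j > 0, or None when optimal."""
    return next((j for j, b in enumerate(beta) if b > 0), None)

def bland_leaving(col, rhs):
    """(ii): among rows attaining the minimum ratio theta, take the
    smallest row index (col[i] = y_ik, rhs[i] = y_i0)."""
    theta = min(Fraction(rhs[i]) / col[i]
                for i in range(len(col)) if col[i] > 0)
    return next(i for i in range(len(col))
                if col[i] > 0 and Fraction(rhs[i]) / col[i] == theta)

assert bland_entering([0, -2, 1, 3]) == 2        # not the larger beta_3
assert bland_leaving([2, 1, 4], [4, 3, 8]) == 0  # ratios 2, 3, 2: tie -> row 0
```

The deterministic tie-breaking on both choices is exactly what rules out a repeated index set, and hence cycling.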

Theorem 7 (Degeneracy)

With Bland's rule the simplex algorithm cannot cycle, and hence is finite.

Degeneracy in Practice

Until recently, cycling only occurred in contrived examples (such as the one given above). It has therefore been the practice to ignore it in commercial codes.

More recent experience with larger and larger problems indicates that cycling is now considered a real possibility. Rigorous methods such as Bland's rule are not satisfactory in practice, as they increase the number of (or the work per) iterations in the vast majority of problems, which would not cycle anyway.

It has also been suggested that it is perfectly satisfactory to replace y_i0 = 0 by y_i0 = ε (ε = 10^−2 or 10^−3) and then continue.

7. SHADOW PRICES (SP)

Shadow prices are important accounting prices in decision making and in sensitivity analysis. Suppose that we have solved the problem

min { x_0 = c^T x | A x = b, x ≥ 0 }

and found an optimal basis matrix B:

x_B = B^−1 b ≥ 0 (The basis is feasible) (16, a)

r = c_N − N^T (B^−1)^T c_B ≥ 0 (All reduced costs are non-negative) (16, b)

The shadow prices π for this problem are defined by B^T π = c_B (or π^T = c_B^T B^−1). (If there is more than one optimal basis there may be more than one set of SP.)

These 'prices' give information about the objective value if we alter the RHS of the constraints. Let p ∈ ℝ^m denote a general RHS and define the perturbation function v(p): ℝ^m → ℝ by

v(p) = min { c^T x | A x = p; x ≥ 0 } (17)

Thus, solving min { x_0 = c^T x | A x = b, x ≥ 0 } computes v(b). (To be rigorous, v: ℝ^m → ℝ ∪ {−∞, +∞}, using −∞ for unbounded problems and +∞ for infeasible problems.)

Theorem 8

If B^−1 p ≥ 0 then v(p) = v(b) + π^T (p − b).

Proof

If B^−1 p ≥ 0 then B is an optimal basis for (17), as (16, b) is not affected by changing b to p. Thus

v(p) = c_B^T x_B(p) + r^T x_N = c_B^T B^−1 p (since x_N = 0)
     = c_B^T B^−1 b + c_B^T B^−1 (p − b)
     = v(b) + π^T (p − b)

This is a local result, i.e. p must not differ substantially from b, so that B^−1 p ≥ 0 is maintained. We also have the following global result.

Theorem 9

v(p) ≥ v(b) + π^T (p − b) ∀ p ∈ ℝ^m

Proof

v(p) = min_{x ≥ 0; A x = p} { c^T x − π^T (A x − p) }
     ≥ min_{x ≥ 0} { c^T x − π^T (A x − p) }
     = min_{x ≥ 0} { (c^T − π^T A) x } + π^T p
     ≥ π^T p

As

c^T x − π^T A x = [ c_B^T ⋮ c_N^T ] [ x_B ; x_N ] − c_B^T B^−1 [ B ⋮ N ] [ x_B ; x_N ]
               = c_B^T x_B + c_N^T x_N − c_B^T x_B − c_B^T B^−1 N x_N
               = r^T x_N ≥ 0 (r ≥ 0, x_N ≥ 0)

and

π^T p = π^T b + π^T (p − b) = c_B^T B^−1 b + π^T (p − b) = v(b) + π^T (p − b).

Why is π called the vector of SP?

Suppose b_1, ..., b_m represent demands for certain products and c_1, ..., c_n are the costs of certain activities which produce these products. Suppose there is an increase δ in the demand for product t, i.e. b_t := b_t + δ, and suppose that a small firm offers to produce the extra demand at price μ_t. Should one accept, or decide to produce more oneself?

Note: p = b + δ e_t, where e_t = (0, ..., 0, 1, 0, ..., 0)^T with the 1 in position t.

Accept the offer ⇒ total production cost = v(b) + δ μ_t.

Produce the extra oneself ⇒ total production cost
= v(b) + δ π_t if B^−1 (b + δ e_t) ≥ 0
≥ v(b) + δ π_t in general.

Thus, if μ_t < π_t one should definitely accept the offer. If μ_t > π_t, and if B^−1 (b + δ e_t) ≥ 0, one should definitely reject the offer. In this case π_t is the maximum price one should pay.

Maximization Problems

For maximization problems, Theorem 8 is unchanged and the inequality is reversed in the statement of Theorem 9.

Evaluation of Shadow Prices

In certain circumstances the shadow prices for a particular row can be read off from the final tableau. Suppose that row t was initially a ≤ constraint and a slack variable x_s was added. The objective row coefficient β_s for this variable in the final tableau is given by

β_s = π^T a_s − c_s = π^T e_t − 0 = π_t

Therefore π_t can be read off. If x_s is the slack variable for a ≥ constraint then we get β_s = −π_t.

Example 22:

Considering Example 20 and reading off the final top row coefficients, we get

π = ( −3, −1, 0 )^T

Note that in this case π ≤ 0, which makes sense. If the R.H. sides increase to 4 + δ_1, 3 + δ_2, 12 + δ_3 then the minimum obtainable objective function value will decrease from −15 to

−15 + π_1 δ_1 + π_2 δ_2 + π_3 δ_3 = −15 − 3δ_1 − δ_2

for 'small' positive δ_1, δ_2, δ_3.
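The defining system B^T π = c_B can be checked numerically. The sketch below assumes the optimal basis of Example 20 consists of the columns of x_1, x_2, x_6 (as read off the final tableau) and solves the resulting 3×3 system by Gaussian elimination over rationals; the function names are illustrative:

```python
from fractions import Fraction

def solve3(M, v):
    """Solve a nonsingular 3x3 system M y = v over exact Fractions."""
    A = [[Fraction(x) for x in row] + [Fraction(t)] for row, t in zip(M, v)]
    n = 3
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)  # pivot search
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]                # normalize
        for r in range(n):                                # eliminate
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] for r in range(n)]

# Basis matrix B: columns of x1, x2 and the slack x6 in standard form.
B = [[1, 1, 0],
     [1, -1, 0],
     [3, 2, 1]]
cB = [-4, -2, 0]
BT = [[B[r][c] for r in range(3)] for c in range(3)]
pi = solve3(BT, cB)                       # B^T pi = c_B
assert pi == [-3, -1, 0]
b = [4, 3, 12]
assert sum(p * t for p, t in zip(pi, b)) == -15   # v(b) = pi^T b
```

The last assertion checks Theorem 8 at p = b: the shadow prices price out the whole RHS to the optimal value −15.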

8. INITIAL BASIC FEASIBLE SOLUTION: The two-phase SIMPLEX

If no BFS is known for the problem, one can create one by adding artificial variables. (Previously, in Section 4, we constructed a feasible all-slack basis. Alternatively, one may know a BFS because a similar problem has been solved previously. We consider below the situation when neither is possible.)

Suppose the constraints are

x_1 + 2x_2 + 3x_3 ≥ 6
2x_1 + x_2 + x_3 = 4
x_1 + 2x_2 + x_3 ≤ 3
x_i ≥ 0, i = 1, ..., 3.

After adding slack variables x_4, x_5, we add artificials η_1, η_2 to give

x_1 + 2x_2 + 3x_3 − x_4 + η_1 = 6
2x_1 + x_2 + x_3 + η_2 = 4
x_1 + 2x_2 + x_3 + x_5 = 3
x_i ≥ 0, i = 1, ..., 5; η_1, η_2 ≥ 0.

In general, we first add slack variables to obtain equations and then ensure that the RHS's b_i are non-negative by multiplying through by −1 where necessary.

The aim next is to construct an enlarged system of equalities which is itself a basic representation. Some rows will contain slack variables, other rows will contain an artificial, and some rows will contain both an artificial and a slack variable. This produces a basic representation whose BFS consists of artificials and slacks (in rows which do not have an artificial).

In general one needs an artificial variable for each equality constraint and one for each inequality ≥ b_i where b_i ≥ 0. The basic solution constructed will be feasible, as we have ensured non-negative RHS's.

After adding slacks and artificials, we have the augmented system:

x_0 − c^T x = 0, A x + I_m η = b (18)

Clearly, a solution (x*, η) to (18) gives a solution to min { x_0 = c^T x | A x = b, x ≥ 0 } iff η = 0.

We note that by construction a BFS to (18) is known. The problem of finding a BFS to min { x_0 = c^T x | A x = b, x ≥ 0 } has now been replaced by that of finding a BFS to (18) with η = 0. To do this we solve the linear programming problem

min ξ = η_1 + η_2 + ... + η_a (19)

S.T. x_0 − c^T x = 0, A x + I_m η = b, x, η ≥ 0

If a feasible solution to min { x_0 = c^T x | A x = b, x ≥ 0 } exists, then the minimum value of ξ is zero, with η_1 = ... = η_a = 0. We can apply the simplex method directly to (19) since by construction we have an initial BFS to the problem. If, having solved (19), we find ξ = 0, then the current values of x_0, x_1, ..., x_n will constitute a BFS to min { x_0 = c^T x | A x = b, x ≥ 0 }.

The infeasibility ξ can be expressed in terms of the initial non-basic variables in the following way. Suppose that the row containing artificial η_i is

η_i + Σ_{j=1}^n a_ij x_j = b_i

Adding the infeasible rows we obtain

ξ + Σ_j ( Σ_{i∈P} a_ij ) x_j = Σ_{i∈P} b_i (20)

where P is the set of indices of the infeasible rows (i.e. those with an artificial variable). (20) expresses ξ in terms of the non-basic x_j, the coefficient of x_j being given by the sum of the coefficients of x_j in the infeasible rows. (Note that the basic x_i, the slack variables for the equations that do not need an artificial, do not appear in the rows in which an artificial occurs. This is why (20) expresses ξ in terms of the non-basic x_j.)
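Equation (20) is just a row sum over the infeasible rows. A minimal sketch, with the helper name and data layout assumed for illustration (the test data is that of Example 24 below):

```python
from fractions import Fraction

def phase1_row0(rows, rhs, infeasible):
    """Build (20): xi + sum_j (sum_{i in P} a_ij) x_j = sum_{i in P} b_i.

    rows[i][j] = a_ij, rhs[i] = b_i, infeasible = the index set P."""
    coeffs = [sum(rows[i][j] for i in infeasible)
              for j in range(len(rows[0]))]
    return coeffs, sum(rhs[i] for i in infeasible)

# Columns x1, x2, s1, e2; rows 2 and 3 carry the artificials:
rows = [[Fraction(1, 2), Fraction(1, 4), 1, 0],
        [1, 3, 0, -1],
        [1, 1, 0, 0]]
coeffs, total = phase1_row0(rows, [4, 20, 10], infeasible=[1, 2])
assert coeffs == [2, 4, 0, -1] and total == 30
```

The resulting row is exactly the "New Row 0" that starts the Phase 1 tableau.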

Example 23:

max x_0 = 3x_1 + x_2 + x_3

S.T. x_1 + x_2 + x_3 = 10
2x_1 − x_2 ≥ 2
x_1 − 2x_2 + x_3 ≤ 6; x_i ≥ 0, i = 1, ..., 3.

Adding slacks and artificials where necessary, the constraints become

x_0 − 3x_1 − x_2 − x_3 = 0
x_1 + x_2 + x_3 + η_1 = 10
2x_1 − x_2 − x_4 + η_2 = 2
x_1 − 2x_2 + x_3 + x_5 = 6
x_i ≥ 0, i = 1, ..., 5; η_1, η_2 ≥ 0.

Note that an artificial's column is ignored after the corresponding variable is made non-basic.

Basic Variables | x_1   x_2   x_3   x_4   x_5   η_1   η_2 |  RHS
___________________________________________________________________
ξ   |  3     0     1    -1     0     0     0 |  12
x_0 | -3    -1    -1     0     0     0     0 |   0
η_1 |  1     1     1     0     0     1     0 |  10
η_2 |  2*   -1     0    -1     0     0     1 |   2
x_5 |  1    -2     1     0     1     0     0 |   6
___________________________________________________________________
    | x_1   x_2   x_3   x_4   x_5   η_1 |  RHS
ξ   |  0    3/2    1    1/2    0     0 |   9
x_0 |  0   -5/2   -1   -3/2    0     0 |   3
η_1 |  0    3/2*   1    1/2    0     1 |   9
x_1 |  1   -1/2    0   -1/2    0     0 |   1
x_5 |  0   -3/2    1    1/2    1     0 |   5
___________________________________________________________________
    | x_1   x_2   x_3   x_4   x_5 |  RHS
ξ   |  0     0     0     0     0 |   0
x_0 |  0     0    2/3  -2/3    0 |  18
x_2 |  0     1    2/3   1/3    0 |   6
x_1 |  1     0    1/3  -1/3    0 |   4
x_5 |  0     0     2     1*    1 |  14
___________________________________________________________________
    | x_1   x_2   x_3   x_4   x_5 |  RHS
x_0 |  0     0     2     0    2/3 |  82/3
x_2 |  0     1     0     0   -1/3 |  4/3
x_1 |  1     0     1     0    1/3 |  26/3
x_4 |  0     0     2     1     1  |  14

(* marks the pivot element. After the second pivot ξ = 0, Phase 1 is complete and Phase 2 continues on the x_0 row, reaching the optimum x_0 = 82/3.)

Description of the Two-Phase Method

Phase 1

Step 1: Modify the constraints so that the RHS of each constraint is non-negative. This requires that each constraint with a negative RHS be multiplied through by −1.

Step 1´: Identify each constraint that is now (after Step 1) an equality or a ≥ constraint. In Step 3 we shall add an artificial variable to each such constraint.

Step 2: Convert each inequality constraint to standard form. If constraint i is a ≤ constraint, add a slack variable. If constraint i is a ≥ constraint, subtract an excess variable.

Step 3: If (after Step 1´) constraint i is a ≥ or an equality constraint, add an artificial variable η_i to constraint i.

Step 4: Find the minimum value of ξ using the simplex algorithm. Each excess and artificial variable is restricted to be ≥ 0.

Phase 1 ends when ξ has been minimized. This phase will result in one of the following three cases, which are dealt with in Phase 2.

Phase 2

Case 1: ξ* > 0. The original LP problem has no feasible solution.

Case 2: The optimal value ξ* = 0 and no artificial variables are in the optimal Phase 1 basis. (The final Phase 1 basis contains no basic artificial variables at zero value.) ⇒ Drop all columns in the optimal Phase 1 tableau that correspond to the artificial variables. We now combine the original objective function with the constraints from the Phase 1 tableau. The final basis of Phase 1 is the initial basis of the Phase 2 LP. The optimal solution to the Phase 2 LP is the optimal solution to the original LP problem.

Case 3: The optimal value ξ* = 0 and at least one artificial variable (at zero value) is in the optimal Phase 1 basis. (When this occurs, it indicates that the original LP had at least one redundant constraint.) Again, continue by optimizing x_0, but we have to ensure that no artificial variable becomes non-zero again. We note first that we will not make an artificial variable non-zero by allowing it to enter the basis once it becomes non-basic. The problem occurs when x_k is to enter the basis and y_jk < 0, where y_jk is the coefficient in column k of some row j with an artificial in it. But as y_j0 = 0 (the coefficient on the RHS of row j), we can pivot on y_jk (an unusual pivot). The basic solution does not change, but the artificial is pivoted out of the basis. One thus applies the normal simplex criteria for the choice of pivot, except that the above case y_jk < 0 causes a "non-standard" pivot selection.

We can see now how we have sidestepped the earlier assumption that the rows of the A matrix are linearly independent: we have ensured this by adding artificial variables.

(If the rows of the original A matrix are in fact linearly dependent then, even when a feasible solution is found, there will be artificial basic variables at zero value, of course.)

We briefly discuss why ξ* > 0 ⇒ the original LP has no feasible solution, and ξ* = 0 ⇒ the original LP has at least one feasible solution.

Suppose the original LP is infeasible. Then the only way to obtain a feasible solution to the Phase 1 LP is to let at least one artificial variable be positive ⇒ ξ* > 0 ⇒ Case 1.

If the original LP has a feasible solution, this feasible solution (with all η_i = 0) is feasible in the Phase 1 LP and leads to ξ = 0 ⇒ if the original LP has a feasible solution, the optimal Phase 1 solution will have ξ* = 0.
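The Phase 1 construction can be sketched end to end. This compact routine is illustrative (it is not the notes' tableau bookkeeping): it builds row 0 via (20), runs simplex iterations, and returns ξ*. Run on the data of Examples 24 and 25 below, it reproduces ξ* = 0 (feasible) and ξ* = 6 (infeasible):

```python
from fractions import Fraction as F

def phase1(A, b, art):
    """Phase 1: minimise xi = sum of artificials on a small tableau.

    A: standard-form constraint rows, b: non-negative RHS values,
    art: for each row, the column index of its artificial (or None).
    Returns xi* as a Fraction: 0 iff the original system is feasible."""
    m, n = len(A), len(A[0])
    T = [[F(x) for x in row] + [F(v)] for row, v in zip(A, b)]
    # Build row 0 by adding up the rows that contain an artificial (eq. (20))
    row0 = [sum((T[i][j] for i in range(m) if art[i] is not None), F(0))
            for j in range(n + 1)]
    for a in art:                       # artificial columns cost 0 in row 0
        if a is not None:
            row0[a] = F(0)
    T.insert(0, row0)
    while True:
        k = max(range(n), key=lambda j: T[0][j])     # entering column
        if T[0][k] <= 0:
            return T[0][-1]                          # optimal xi*
        rows = [i for i in range(1, m + 1) if T[i][k] > 0]
        if not rows:
            return None   # cannot happen in Phase 1 with b >= 0
        r = min(rows, key=lambda i: T[i][-1] / T[i][k])   # ratio test
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][k] != 0:
                f = T[i][k]
                T[i] = [x - f * y for x, y in zip(T[i], T[r])]

# Columns: x1, x2, s1, e2, eta2, eta3.  RHS (4, 20, 10) is Example 24's
# feasible data; (4, 36, 10) is Example 25's infeasible data.
rows = [[F(1, 2), F(1, 4), 1, 0, 0, 0],
        [1, 3, 0, -1, 1, 0],
        [1, 1, 0, 0, 0, 1]]
art = [None, 4, 5]
xi_feasible = phase1([r[:] for r in rows], [4, 20, 10], art)
xi_infeasible = phase1([r[:] for r in rows], [4, 36, 10], art)
assert xi_feasible == 0 and xi_infeasible == 6
```

A positive ξ* is a certificate of infeasibility: no pivot can drive the artificials to zero.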

Example 24: (Case 2)

min x_0 = 2x_1 + 3x_2

(1/2)x_1 + (1/4)x_2 ≤ 4
x_1 + 3x_2 ≥ 20
x_1 + x_2 = 10
x_1, x_2 ≥ 0

Steps 1-3 transform the constraints into

(1/2)x_1 + (1/4)x_2 + s_1 = 4
x_1 + 3x_2 − e_2 + η_2 = 20 (s = slack, e = excess, η = artificial)
x_1 + x_2 + η_3 = 10

Step 4 yields the Phase 1 LP: min ξ = η_2 + η_3 subject to:

(1/2)x_1 + (1/4)x_2 + s_1 = 4
x_1 + 3x_2 − e_2 + η_2 = 20
x_1 + x_2 + η_3 = 10

Initial BFS for Phase 1: s_1 = 4, η_2 = 20, η_3 = 10. η_2 and η_3 must be eliminated from the objective function ξ before solving Phase 1:

Row 0: ξ − η_2 − η_3 = 0
+ Row 2: x_1 + 3x_2 − e_2 + η_2 = 20
+ Row 3: x_1 + x_2 + η_3 = 10
= New Row 0: ξ + 2x_1 + 4x_2 − e_2 = 30

Combining the new row 0 with the Phase 1 constraints yields the initial Phase 1 tableau. Since Phase 1 is always a minimisation problem (even if the original LP is a maximisation problem), we enter x_2 into the basis. The ratio test indicates that x_2 will enter the basis in row 2; η_2 will then exit from the basis.

In the second tableau, since 2/3 > 1/3, x_1 enters the basis. The ratio test indicates that η_3 should leave the basis. Since η_2 and η_3 are both nonbasic in the next tableau, we know that the third tableau is optimal for Phase 1.

BV  | x_1    x_2   s_1    e_2    η_2    η_3 |  RHS  | ratio
___________________________________________________________________
ξ   |  2      4     0     -1      0      0  |  30   |
s_1 | 1/2    1/4    1      0      0      0  |   4   | 16
η_2 |  1      3     0     -1      1      0  |  20   | 20/3
η_3 |  1      1     0      0      0      1  |  10   | 10
___________________________________________________________________
ξ   | 2/3     0     0     1/3   -4/3     0  | 10/3  |
s_1 | 5/12    0     1     1/12  -1/12    0  |  7/3  | 28/5
x_2 | 1/3     1     0    -1/3    1/3     0  | 20/3  | 20
η_3 | 2/3     0     0     1/3   -1/3     1  | 10/3  | 5
___________________________________________________________________
ξ   |  0      0     0      0     -1     -1  |   0   |
s_1 |  0      0     1    -1/8    1/8   -5/8 |  1/4  |
x_2 |  0      1     0    -1/2    1/2   -1/2 |   5   |
x_1 |  1      0     0     1/2   -1/2    3/2 |   5   |

ξ* = 0 ⇒ Phase 1 is concluded. BFS: s_1 = 1/4, x_2 = 5, x_1 = 5. There are no artificial variables in the basis: this is an example of Case 2. We now drop the columns of the artificial variables η_2, η_3 (we no longer need them) and reintroduce the original objective function:

min x_0 = 2x_1 + 3x_2, or x_0 − 2x_1 − 3x_2 = 0.

Since x_1 and x_2 are in the optimal Phase 1 basis, they must be eliminated from the Phase 2 row zero (i.e. the objective function x_0). This is normally done implicitly, or automatically, as Phase 1 progresses, as in Example 23. The purpose of this explicit illustration is to highlight the underlying mechanics of the process.

Phase 2 Row 0: x_0 − 2x_1 − 3x_2 = 0
+ 3 × (Row 2): 3x_2 − (3/2)e_2 = 15
+ 2 × (Row 3): 2x_1 + e_2 = 10
= New Phase 2 Row 0: x_0 − (1/2)e_2 = 25

We now begin Phase 2 with the following:

min x_0: x_0 − (1/2)e_2 = 25
s_1 − (1/8)e_2 = 1/4
x_2 − (1/2)e_2 = 5
x_1 + (1/2)e_2 = 5

This is optimal. In this problem, Phase 2 requires no further pivots. If the Phase 2 row 0 does not indicate an optimal tableau, simply continue with the simplex algorithm until an optimal row 0 (i.e. objective function) is obtained.

Example 25: (Case 1)

min x_0 = 2x_1 + 3x_2

(1/2)x_1 + (1/4)x_2 ≤ 4
x_1 + 3x_2 ≥ 36
x_1 + x_2 = 10
x_1, x_2 ≥ 0

After Steps 1-4, min ξ = η_2 + η_3 subject to:

(1/2)x_1 + (1/4)x_2 + s_1 = 4
x_1 + 3x_2 − e_2 + η_2 = 36
x_1 + x_2 + η_3 = 10

Initial BFS for Phase 1: s_1 = 4, η_2 = 36, η_3 = 10. Again, η_2 and η_3 must be eliminated from the objective function ξ before solving Phase 1:

Row 0: ξ − η_2 − η_3 = 0
+ Row 2: x_1 + 3x_2 − e_2 + η_2 = 36
+ Row 3: x_1 + x_2 + η_3 = 10
= New Row 0: ξ + 2x_1 + 4x_2 − e_2 = 46

Since 4 > 2, x_2 enters the basis and replaces η_3. In the second tableau, no variable in Row 0 has a positive coefficient: this is an optimal Phase 1 tableau with ξ* = 6 > 0 ⇒ there is no feasible solution to this problem.

BV  | x_1    x_2   s_1    e_2    η_2    η_3 |  RHS  | ratio
___________________________________________________________________
ξ   |  2      4     0     -1      0      0  |  46   |
s_1 | 1/2    1/4    1      0      0      0  |   4   | 16
η_2 |  1      3     0     -1      1      0  |  36   | 12
η_3 |  1      1     0      0      0      1  |  10   | 10
___________________________________________________________________
ξ   | -2      0     0     -1      0     -4  |   6   |
s_1 | 1/4     0     1      0      0    -1/4 |  3/2  |
η_2 | -2      0     0     -1      1     -3  |   6   |
x_2 |  1      1     0      0      0      1  |  10   |

9. EXTENSIONS OF LP

Some optimization problems can be converted to an LP, or to a sequence of LP's, or can be solved by modifying the simplex algorithm.

EXTENSION 1: Min-Max with LP

Let c^(1), ..., c^(p) ∈ ℝ^n and let

φ(x) = max_{t = 1, ..., p} { (c^(t))^T x }

The min-max problem

min { φ(x) | A x = b; x ≥ 0 } (21)

can be converted to the LP

min { x_0 | x_0 − (c^(t))^T x ≥ 0, t = 1, ..., p; A x = b; x ≥ 0 } (22)

Theorem 10

If (x_0*, x*) solves (22) then x* solves (21) and x_0* = φ(x*).

Proof

If x is a feasible solution to (21) then (φ(x), x) is a feasible solution to (22) (since x satisfies A x = b, x ≥ 0 and φ(x) − (c^(t))^T x ≥ 0, t = 1, ..., p).

Thus x_0* ≤ φ(x), which in particular implies that x_0* ≤ φ(x*). But as x_0* ≥ (c^(t))^T x* for t = 1, ..., p in (22), we have x_0* ≥ φ(x*) and hence x_0* = φ(x*).

It then follows that φ(x*) = x_0* ≤ φ(x) for any feasible solution x to (21).
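The conversion (21) → (22) only manufactures p extra inequality rows in the enlarged variable vector (x_0, x_1, ..., x_n); a minimal sketch, with the data layout assumed for illustration:

```python
def minmax_to_lp(C):
    """Build the extra rows of LP (22) from min-max data c^(1), ..., c^(p).

    Each constraint x0 - (c^(t))^T x >= 0 becomes the row [1, -c^(t)]
    of a >= 0 system in the variables (x0, x1, ..., xn)."""
    return [[1] + [-cj for cj in ct] for ct in C]

rows = minmax_to_lp([[2, 0], [1, 1]])
assert rows == [[1, -2, 0], [1, -1, -1]]

# All rows are satisfied at (x0, x) exactly when x0 >= max_t (c^(t))^T x:
x0, x = 3, [1, 1]
assert all(r[0] * x0 + r[1] * x[0] + r[2] * x[1] >= 0 for r in rows)
```

Minimising x_0 then pushes it down onto the largest of the p linear functions, which is the content of Theorem 10.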

EXTENSION 2: Min-min problems

Let c^(1), ..., c^(p) be as in (1) and let

ψ(x) = min_{t = 1, ..., p} { (c^(t))^T x }

We consider the problem

min { ψ(x) | A x = b; x ≥ 0 } (23)

This can be tackled by solving the p LP's:

min { (c^(t))^T x | A x = b; x ≥ 0 } (24, t)

t = 1, ..., p. Let x^(t), t = 1, ..., p, denote an optimum solution to (24, t) and let z^(t) = (c^(t))^T x^(t).

Theorem 11

If z^(t*) = min_{t = 1, ..., p} { z^(t) } then x^(t*) is an optimal solution to (23) and z^(t*) = ψ(x^(t*)).

Proof

If x is a feasible solution to (23) then for some q,

ψ(x) = (c^(q))^T x ≥ z^(q) ≥ z^(t*)

Now for t ≠ t* we have

(c^(t))^T x^(t*) ≥ z^(t) ≥ z^(t*) = (c^(t*))^T x^(t*)

and hence ψ(x^(t*)) = z^(t*).

EXTENSION 3: Goal Programming and Approximation Problems:

minimise Σ_{i=1}^p | (c^(i))^T x − b_i |

A typical function (c^(i))^T x − b_i may be split into negative and positive parts by writing

(c^(i))^T x − b_i = x_i^+ − x_i^− ; x_i^+, x_i^− ≥ 0.

The relationship

| (c^(i))^T x − b_i | ≤ x_i^+ + x_i^−

leads to the formulation

min_{x, x^+, x^−} { Σ_{i=1}^p ( x_i^+ + x_i^− ) | (c^(i))^T x − b_i = x_i^+ − x_i^−, i = 1, ..., p; x^+, x^− ≥ 0 }

for x ∈ ℝ^n, x^+, x^− ∈ ℝ^p.
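The positive/negative split can be sketched as follows; at a complementary split (x_i^+ x_i^− = 0) the bound above holds with equality, which is why minimising the sum recovers the absolute value (names are illustrative):

```python
def split_abs(g):
    """Split g = (c^(i))^T x - b_i into (x_i^+, x_i^-), both >= 0.

    Returns the complementary split max(g, 0), max(-g, 0), for which
    x_i^+ - x_i^- == g and x_i^+ + x_i^- == |g|."""
    return (max(g, 0), max(-g, 0))

for g in (-7, 0, 3):
    plus, minus = split_abs(g)
    assert plus - minus == g and plus + minus == abs(g)
    assert plus >= 0 and minus >= 0 and plus * minus == 0
```

Any non-complementary split with the same difference has a strictly larger sum, so an optimal LP solution never inflates both parts at once.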

Example 26

min { | x_1 − 2x_2 + x_3 | : 2x_1 + 3x_2 + 4x_3 ≥ 60; 7x_1 + 5x_2 + 3x_3 ≥ 105; x_1, x_2, x_3 ≥ 0 }

The LP formulation (written as a max problem):

max − ( x_1^+ + x_1^− )

x_1 − 2x_2 + x_3 − ( x_1^+ − x_1^− ) = 0
2x_1 + 3x_2 + 4x_3 ≥ 60
7x_1 + 5x_2 + 3x_3 ≥ 105
x_1, x_2, x_3, x_1^+, x_1^− ≥ 0

EXTENSION 4: Fractional LP

min { ( α_0 + α_1 x_1 + ... + α_n x_n ) / ( β_0 + β_1 x_1 + ... + β_n x_n ) | A x = b; x ≥ 0 } (25)

where the set P = { x | A x = b, x ≥ 0 } is bounded, i.e. ∃ L > 0 such that

P ⊆ { x | ‖x‖ ≤ L }.

We first make the transformation

x_j = y_j / y_0 for j = 1, ..., n

and assume that y_0 > 0. Problem (25) then becomes

min { ( α_0 y_0 + α_1 y_1 + ... + α_n y_n ) / ( β_0 y_0 + β_1 y_1 + ... + β_n y_n ) |
      b_i y_0 − Σ_{j=1}^n a_ij y_j = 0, i = 1, ..., m;
      y_0 > 0; y_1, ..., y_n ≥ 0 } (26)

Note next that if (y_0, y_1, ..., y_n) is feasible for (26) then (λ y_0, λ y_1, ..., λ y_n) is also feasible for any λ > 0, and further has the same objective value as (λ = 1). We can thus restrict our attention to (y_0, y_1, ..., y_n) satisfying

β_0 y_0 + ... + β_n y_n = 1 or −1

because, given an optimal solution to (26) which does not satisfy one of these equations, we can positively scale it to one that does. Thus, (26) can be solved by solving the two problems

min { α_0 y_0 + α_1 y_1 + ... + α_n y_n | β_0 y_0 + β_1 y_1 + ... + β_n y_n = δ;
      b_i y_0 − Σ_{j=1}^n a_ij y_j = 0, i = 1, ..., m;
      y_0, ..., y_n ≥ 0 } (27)

where in one problem δ = 1 and in the other δ = −1. We then choose the better of the two solutions, (y_0*, y_1*, ..., y_n*), and then (y_1*/y_0*, ..., y_n*/y_0*) is optimal for (25). This will only be valid if we know that y_0* > 0. Suppose to the contrary that y_0* = 0. Then setting

η = ( y_1*, ..., y_n* )^T

we have A η = 0, η ≥ 0 and η ≠ 0, as β_1 y_1* + ... + β_n y_n* = 1 or −1. It follows that if x ∈ P then x + λ η ∈ P for any λ > 0. This contradicts the fact that P is bounded (why?).