Chapter 3 - Deriving Solutions from a Linear Optimization Model


    Chapter 3

    *Deriving Solutions from a Linear Optimization Model

    Learning Objective

    This chapter will enable you to understand the principles that the computer uses to arrive at

    the optimal solution of a linear programming problem. It describes the simplex method used by

    most software packages to arrive at an optimal solution of a linear programming problem.

    Various approaches are described for obtaining a starting feasible solution to a Linear

    Programming problem, and for finding out whether a linear programming problem has a

    feasible solution which satisfies all the constraints simultaneously.

    3.1 Introduction to Algorithms

    In the most general sense, an algorithm is any set of detailed instructions which results in a

    predictable end-state from a known beginning. How good the performance of an algorithm is

    depends on the instructions given. A pervasive example of an algorithm in today's world is a

    computer program, which consists of a series of instructions in a particular sequence, designed to

    perform a stipulated task. The term algorithm has been coined to honor Al Khwarizmi, who lived

    in Baghdad in the ninth century and wrote a book in Arabic, in which he laid down the

    methodology or procedure for adding, multiplying and dividing numbers, as also for

    extracting square roots and computing digits of π. The procedure that he laid down was unambiguous,

    precise and efficient, and was the precursor of the algorithms which we come across today. The

    widely known Fibonacci numbers are generated by the following simple rule, which is akin to an

    algorithm: F(n) = F(n−1) + F(n−2) for n ≥ 2, with F(0) = 0 and F(1) = 1.
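    The Fibonacci rule can be turned directly into a short program. The following is an illustrative sketch in Python (the choice of language is ours, purely for illustration):

```python
def fibonacci(n):
    """Generate the first n Fibonacci numbers using the rule
    F(k) = F(k-1) + F(k-2), with F(0) = 0 and F(1) = 1."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b  # apply the recurrence one step
    return sequence

print(fibonacci(8))  # -> [0, 1, 1, 2, 3, 5, 8, 13]
```

    Like Al Khwarizmi's procedures, the rule is unambiguous and precise: the same input always produces the same sequence.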

    Algorithms are synonymous with operations research, and quite a few of them are described in

    this book. Most of them are iterative procedures, which keep repeating a series of steps, called

    iterations, for arriving at optimal or near-optimal solutions. Perhaps the most widely applied

    algorithm is the Simplex Method, developed by George Dantzig in 1947, to find optimal

    solutions for linear programming (LP) problems. Subsequently, other algorithms have been

    developed, notably the Ellipsoid algorithm by Khachiyan and Shor in 1979, and the Interior-

    Point algorithm by Narendra Karmarkar in 1984. However, the simplex method has held its own

    for more than six decades, and is still the most commonly used method for solving LP problems.

    3.2 Motivation for studying the Simplex Method

    A large number of software packages are available for using the simplex method to determine

    *Excerpt from the draft of a book, Operations Research: Modeling, Computations and

    Applications, by Dr. Ranjan Ghosh


    the optimum solution for a LP problem. Hence it is not necessary to carry out computations

    manually for solving a LP problem. More importantly, the volume of computation required to

    solve any practical LP problem is so enormous that it would just not be feasible to perform the

    computations manually. The question one may legitimately pose is why it is necessary

    to understand how the simplex method works. For instance, to drive a vehicle, one does not have

    to know how various parts of the vehicle function. However, to do a more effective job of

    driving, it helps to have a rudimentary level of knowledge of how the engine, the battery, the

    transmission system and the brakes function. In a similar vein, an airplane pilot does not have to

    be an aeronautical engineer. But a pilot does need to have some knowledge of avionics, and the

    various systems in an aircraft such as propulsion, hydraulics and instrumentation. Using the same

    argument, an executive who wants to apply operations research as an aid to decision-making

    would be in a better position if he develops an insight into how the various tools and techniques

    of Operations research work, their underlying assumptions, their capabilities and limitations, and

    the interpretation of solutions. It is pertinent to mention that a manager need not master

    all the underlying theory or be an expert on the subject, but only needs an appropriate

    (basic) level of knowledge. An executive who is involved in some manner in applying linear

    programming would find it beneficial to understand how the simplex method works, as it

    will enable him to ensure that computer-generated solutions are implemented

    successfully, and to interpret the optimum solution so that it yields information having

    economic/financial implications, which can be used for decision-making.

    3.3 Transition from Graphical to Algebraic Procedure

    The feasible region of a linear programming problem is defined by the intersection of the half-

    spaces corresponding to the various constraints and is, if bounded, a convex polygon in two

    dimensions and a convex polyhedron in higher dimensions. As the variables are continuous and

    not discrete, the feasible region contains an infinite number of points.

    The graphical method serves to illustrate the nature of the optimal solution, and how to arrive at

    an optimal solution for a linear programming problem. However, the graphical method has a

    severe limitation in that it can be used only when there are two decision variables, as it is not

    possible to draw a graph when there are more than two variables. Therefore, most real-life LP

    problems are not amenable to the graphical procedure as they have many more variables. Arising

    from the observations made in Section 2.11 on the Graphical Solution Method, it follows that a

    possible method or procedure for arriving at an optimal solution would be to carry out a search

    over all feasible corner point solutions. Such a procedure will have to address a number of issues,

    namely, how to identify these corner point feasible (CPF) solutions, and how to find one to start

    with. Furthermore, how to search efficiently so that as few CPF solutions as possible are

    considered and evaluated. Lastly, there must be a rule which can be applied to find out

    when a CPF solution is optimal so that the search procedure can be stopped.


    In the simplex method, the feasible region or solution space is defined by the set of points which

    satisfy simultaneously the m constraints and the non-negativity restrictions of the n variables.

    Hence the representation is algebraic in nature. The computations which are carried out at each

    iteration are also in the nature of algebraic transformations. The following results are available

    from linear algebra. For a system of linear equations, which are independent and consistent, there

    is a unique solution if the number of equations, m, and the number of variables, n, are equal. For

    instance, for the two equations x + 2y = 9 and 2x + y = 6, for which m = n = 2, there is a unique

    solution, i.e., x = 1 and y = 4. For a system of linear equations, in which the number of

    equations, m, is less than the number of variables, n, there is an infinite number of solutions. For

    instance, the equation, x + y = 10 has an infinite number of solutions as, in this case, m = 1, n = 2

    and m < n. Any point on the straight line x + y = 10 satisfies the equation, and there are an

    infinite number of such points. When m < n, the solution set, defined by the intersection of the

    m hyperplanes, has a dimension of (n − m).
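    Both cases can be checked numerically. The sketch below (using NumPy, our choice of tool, not the book's) solves the m = n system and then exhibits several distinct solutions of the single equation x + y = 10:

```python
import numpy as np

# Unique solution when m = n: x + 2y = 9 and 2x + y = 6
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([9.0, 6.0])
solution = np.linalg.solve(A, b)
print(solution)  # -> [1. 4.], i.e. x = 1, y = 4

# Infinitely many solutions when m < n: x + y = 10.
# Every point (t, 10 - t) on the line satisfies the equation.
for t in (0.0, 2.5, 7.0):
    x, y = t, 10.0 - t
    assert x + y == 10.0
```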

    As described in Chapter 2, the standard form of a linear programming problem in compact

    notation is as follows:

    Maximize Z = Σ_{j=1}^{n} cj xj

    subject to

    Σ_{j=1}^{n} aij xj ≤ bi    for i = 1, …, m

    xj ≥ 0    for j = 1, 2, …, n

    By adding m slack variables s1, s2, …, sm to the m constraints, we convert the inequality

    constraints into equalities, and have the following LP.

    Maximize Z = Σ_{j=1}^{n} cj xj

    subject to the linear constraints

    Σ_{j=1}^{n} aij xj + si = bi    for i = 1, …, m

    xj ≥ 0 for j = 1, 2, …, n,  and  si ≥ 0 for i = 1, …, m


    In expanded form, we have

    Maximize Z = c1x1 + c2x2 + … + cjxj + … + cnxn

    subject to linear constraints of the following form

    a11x1 + a12x2 + … + a1jxj + … + a1nxn + s1 = b1

    . . .

    ai1x1 + ai2x2 + … + aijxj + … + ainxn + si = bi

    . . .

    am1x1 + am2x2 + … + amjxj + … + amnxn + sm = bm

    Non-negative variables

    x1, x2, …, xj, …, xn ≥ 0

    Non-negative right hand side constants

    b1, b2, …, bi, …, bm ≥ 0

    Here n is the number of decision variables; m is the number of constraints.

    There is no relationship between n and m.

    The above form of the linear programming problem is referred to as the Canonical Form or

    Augmented Form. It has the following characteristics:

    All constraints are expressed as equalities.
    All variables are restricted to be non-negative.
    Right Hand Sides (RHSs) of all constraints are non-negative.
    Each constraint equation has a basic variable.

    For a system of m linear equations having (m + n) variables, we can identify the extreme points

    by setting n variables equal to zero and solving the resulting system of equations. The solution

    corresponds to an extreme point or corner point and is referred to as a basic solution in the

    parlance of linear programming. These corner points may be feasible or infeasible. In a basic

    solution, the n variables set equal to zero are called non-basic variables, while the remaining m

    variables are called basic variables.


    If all the basic variables for a particular basic solution have non-negative values, the Basic

    Solution is called a Basic Feasible Solution (BFS). The maximum number of corner points is

    fixed by the number of ways in which m or n can be selected out of (m+n), and is given by:

    (m+n)Cm = (m+n)! / (m! n!)
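    For the small example solved later in Section 3.5 (m = 3 constraints, n = 2 decision variables, hence 5 variables after adding slacks), this bound of 10 candidate corner points can be verified by brute force. The sketch below (our own illustration) enumerates every choice of m = 3 basic variables, solves for their values, and counts how many basic solutions are feasible:

```python
from itertools import combinations
import numpy as np

# Augmented system of Section 3.5: columns are x1, x2, s1, s2, s3
A = np.array([[1.0, 2.0, 1.0, 0.0, 0.0],
              [3.0, 2.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 1.0]])
b = np.array([16.0, 24.0, 6.0])

candidates, feasible = 0, 0
for basis in combinations(range(5), 3):   # choose 3 basic variables
    candidates += 1
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-9:      # singular basis: no basic solution
        continue
    values = np.linalg.solve(B, b)
    if np.all(values >= -1e-9):           # all basic variables non-negative
        feasible += 1

print(candidates, feasible)  # -> 10 5: of 10 candidate bases, 5 are BFSs
```

    The 5 basic feasible solutions found here are exactly the corner points of the feasible region of that example.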

    3.4 Overview of the Simplex Method

    3.4.1 Step 1: Obtaining an Initial Basic Feasible Solution (BFS)

    A basic feasible solution is required to start the algorithm. If the objective function is to be

    maximized and all the constraints are less-than-or-equal-to (≤) inequalities, obtaining

    an initial BFS is straightforward. If, however, there are equality (=) or greater-than-or-equal-to (≥)

    constraints in a LPP, we may have to solve a

    derived LPP to find an initial BFS. Two ways of doing this, the two-phase method and the big-M

    method, are described in Section 3.7. If no initial BFS can be found, it implies that the LP
    has no feasible solution, and the algorithm terminates at this stage.

    3.4.2 Step 2: Test for Optimality

    Determine whether the current BFS or, equivalently, the value of the objective function can be

    improved. If it is not possible to do so, the current BFS is optimal, and the algorithm terminates.

    This test for optimality is done by finding out whether the value of the objective function can be

    increased (if it is to be maximized) or decreased (if it is to be minimized) by increasing the value

    of one non-basic variable from zero to some positive amount. It is to be noted that the coefficient

    of each non-basic variable in Row (Equation) 0 indicates how Z changes per unit increase of the

    associated variable: a negative coefficient means Z increases, and a positive coefficient means

    Z decreases.
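    For a maximization problem, the test therefore amounts to checking the signs of the Row 0 coefficients. A minimal sketch, assuming the row is stored as a plain list of coefficients (the function name is ours):

```python
def is_optimal_max(row0_coeffs):
    """Row 0 of a maximization problem is optimal when no coefficient is
    negative, i.e. no non-basic variable can still increase Z."""
    return all(c >= 0 for c in row0_coeffs)

print(is_optimal_max([-4, -3, 0, 0, 0]))      # initial tableau of 3.5: False
print(is_optimal_max([0, 0, 0.25, 1.25, 0]))  # final tableau of 3.5: True
```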

    3.4.3 Step 3: Performing an Iteration

    Each iteration has several steps:

    (a) Select the entering basic variable, using the optimality condition.

    In the graphical procedure, this is equivalent to determining the direction of movement

    along the edge of the feasible region from one extreme point to another. The purpose of

    this step is to select one non-basic variable to increase from zero, while ensuring that the

    values of the basic variables are adjusted to continue satisfying the system of equations.

    Increasing the non-basic variable from zero will convert it to a basic variable for the next

    basic feasible solution (BFS). As it is entering the basis, the variable is called the entering

    basic variable for the current iteration. For a maximization problem, select the non-basic

    variable in Row (Equation) (0), having the largest negative coefficient. In a similar


    manner, for a minimization problem, select the non-basic variable in Row (Equation) (0),

    having the largest positive coefficient. The column associated with the entering basic

    variable is called the pivot column.

    Tie for entering basic variable

    It may happen in some LPPs that, during a particular iteration, there are two candidates

    for the entering basic variable, as the current values of the coefficients of those two non-

    basic variables in equation (0) are equal. In such a situation, the choice of the entering

    basic variable is usually made arbitrarily. However, the following rule has been suggested

    for breaking the tie so that the number of iterations required to arrive at the optimal

    solution is reduced: if there is a tie between two decision variables or between two

    slack/surplus variables, the choice can be arbitrary; but if there is a tie between a

    decision variable and a slack/surplus variable, the decision variable should be chosen.

    (b) Select the leaving basic variable, using the feasibility condition.

    Graphically, this is equivalent to determining where to stop when moving along an edge

    from one extreme point to another and not beyond, so that feasibility is maintained. As the

    value of a non-basic variable is increased from zero, the values of some of the basic

    variables change because of the requirement to satisfy the system of equations, that is, the

    feasibility conditions. The additional requirement for feasibility is that all the variables be

    non-negative. To do this, it is to be determined how large the entering basic variable can

    become without violating the feasibility conditions. There is a limit to how much the

    entering basic variable can be increased because, beyond that limit, one of the basic

    variables will become negative and violate the feasibility condition. For this, the current

    values of the right hand side are divided by the corresponding coefficients in the pivot

    and the row having the minimum exchange ratio is selected as the pivot row. These

    calculations are referred to as the minimum ratio test. The objective of this test is to

    determine which basic variable decreases to zero first as the value of the entering basic

    variable is increased. The coefficient common to the pivot column and the pivot row is

    referred to as the pivot element.

    Tie for leaving basic variable (degeneracy)

    If two or more rows tie for the minimum exchange ratio, the choice of the leaving basic

    variable can be made arbitrarily. However, the tied basic variables that remain in the

    basis will then have a value of zero in the next iteration; a basic feasible solution in

    which one or more basic variables are zero is said to be degenerate.

    (c) Determine the new basic solution by using the appropriate Gauss-Jordan computations.


    In each iteration of the simplex algorithm, after determining the entering and leaving basic

    variables, some elementary algebraic operations are performed on a system of equations so that a

    non-basic variable becomes a basic variable and a basic variable becomes a non-basic variable.

    The algebraic operations are in the nature of (a) multiplying or dividing an equation by a non-

    zero constant, and (b) adding or subtracting a multiple of one equation to or from another

    equation. Hence the following changes take place with respect to various rows or equations.

    Pivot Row:

    (a) New pivot row = Current pivot row / Pivot element, and

    (b) the leaving basic variable is replaced with the entering basic variable.

    All other Rows:

    New row = Current row − (pivot column coefficient) × (new pivot row)
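    The row operations above can be condensed into a single pivot routine. The sketch below (NumPy used for brevity; the function and variable names are ours) applies it to the initial tableau of the example in Section 3.5:

```python
import numpy as np

def pivot(tableau, pivot_row, pivot_col):
    """Gauss-Jordan pivot: normalise the pivot row, then eliminate the
    pivot column from every other row (including Row 0)."""
    t = tableau.astype(float).copy()
    t[pivot_row] /= t[pivot_row, pivot_col]          # new pivot row
    for r in range(t.shape[0]):
        if r != pivot_row:
            t[r] -= t[r, pivot_col] * t[pivot_row]   # row - coeff * pivot row
    return t

# Rows: Row 0, then the three constraints; columns x1, x2, s1, s2, s3, RHS
T = np.array([[-4, -3, 0, 0, 0,  0],
              [ 1,  2, 1, 0, 0, 16],
              [ 3,  2, 0, 1, 0, 24],
              [ 1,  0, 0, 0, 1,  6]])
T1 = pivot(T, 3, 0)   # x1 enters via the third constraint
print(T1[0])  # -> [ 0. -3.  0.  0.  4. 24.], matching the 1st iteration
```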

    3.5 Illustration of the Simplex Solution Procedure

    The steps described above for applying the simplex method will now be illustrated by solving the

    following Linear Programming Problem (LPP)

    Maximize Z = 4x1 + 3x2

    Subject to

    x1 + 2x2 ≤ 16

    3x1 + 2x2 ≤ 24

    x1 ≤ 6

    and x1 ≥ 0, x2 ≥ 0

    After adding slack variables to the constraints, we have the LPP in canonical form:

    Maximize Z = 4x1 + 3x2

    Subject to

    x1 + 2x2 + s1 = 16

    3x1 + 2x2 + s2 = 24

    x1 + s3 = 6

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0
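    As a cross-check on the hand computations that follow, any LP solver should reproduce the same optimum. A sketch using SciPy's `linprog` (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Maximize 4x1 + 3x2  <=>  minimize -4x1 - 3x2
res = linprog(c=[-4, -3],
              A_ub=[[1, 2], [3, 2], [1, 0]],
              b_ub=[16, 24, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # -> [4. 6.] 34.0
```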


    Representing the objective function as an equation with Z as a basic variable, we have the Linear

    Programming Problem in a form, which is convenient for performing the computations of the

    simplex method:

    Maximize Z − 4x1 − 3x2 + 0s1 + 0s2 + 0s3 = 0

    Subject to

    x1 + 2x2 + s1 = 16

    3x1 + 2x2 + s2 = 24

    x1 + s3 = 6

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

    Here, the basic variables s1, s2 and s3 have values of 16, 24 and 6 respectively, and the non-basic
    variables x1 and x2 have a value of zero. In other words, an initial BFS is (0, 0, 16, 24, 6).

    If x1 and x2 are increased from zero, the rate of improvement in Z is 4 and 3 per unit increase in
    x1 and x2 respectively. As an increase in x1 results in a higher rate of improvement than an increase in

    x2, we select x1 as the entering basic variable for the current iteration, keeping x2 = 0. As x1 is

    increased from zero, the requirement to satisfy the functional constraints changes the values of

    the basic variables and, keeping in mind the other requirement for feasibility that all the variables
    be non-negative, we have:

    s1 = 16 − x1 ≥ 0  =>  x1 ≤ 16

    s2 = 24 − 3x1 ≥ 0  =>  x1 ≤ 24/3 (= 8)

    s3 = 6 − x1 ≥ 0  =>  x1 ≤ 6

    For all three conditions to hold good, x1 ≤ min(16, 8, 6) = 6.
    Therefore, the leaving basic variable is s3, and the pivot row is the third functional constraint, the

    pivot element being the coefficient of x1 in that equation.

    The series of steps described above for determining the leaving basic variable is usually referred

    to as the minimum ratio rule.
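    The minimum ratio rule is easy to state as code. A sketch (names are ours), applied to the figures just computed:

```python
def min_ratio_row(rhs, pivot_column):
    """Return the index of the row that limits the entering variable:
    the smallest rhs/coefficient ratio over rows whose pivot-column
    coefficient is positive."""
    best_row, best_ratio = None, float("inf")
    for i, (b, a) in enumerate(zip(rhs, pivot_column)):
        if a > 0 and b / a < best_ratio:
            best_row, best_ratio = i, b / a
    return best_row

# Increasing x1: rhs = (16, 24, 6), pivot column = (1, 3, 1)
print(min_ratio_row([16, 24, 6], [1, 3, 1]))  # -> 2 (the third constraint)
```

    Rows with a zero or negative coefficient place no limit on the entering variable and are skipped.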

    After performing the Gauss-Jordan computations by pivoting on x1 in the third row/equation, we

    obtain:

    1st Iteration: x1 enters the Basis, s3 leaves the Basis

    Z − 3x2 + 4s3 = 24

    2x2 + s1 − s3 = 10

    2x2 + s2 − 3s3 = 6

    x1 + s3 = 6

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0


    It is observed from the above set of equations that the value of Z can be improved by increasing

    from zero the value of the non-basic variable x2. Hence we have not yet arrived at the optimal

    solution, and further iterations are to be carried out, with x2 being the entering basic variable. On

    applying the same logic as in the first iteration to the above set of equations, we identify s2 as the

    leaving basic variable, and the coefficient of x2 in the second functional constraint as the pivot element. After

    performing the Gauss-Jordan computations by pivoting on x2 in the second equation, we obtain:

    2nd Iteration: x2 enters the Basis, s2 leaves the Basis

    Z + (3/2)s2 − (1/2)s3 = 33

    s1 − s2 + 2s3 = 4

    x2 + (1/2)s2 − (3/2)s3 = 3

    x1 + s3 = 6

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

    It is observed from the above set of equations that the value of Z can be improved by increasing

    from zero the value of the non-basic variable s3. Hence we have not yet arrived at the optimal

    solution, and further iterations are to be carried out, with s3 being the entering basic variable. On

    applying the same logic as in the first iteration to the above set of equations, we identify s1 as the

    leaving basic variable, and the coefficient of s3 in the first functional constraint as the pivot element. After
    performing the Gauss-Jordan computations by pivoting on s3 in the first row/equation, we obtain:

    3rd Iteration: s3 enters the Basis, s1 leaves the Basis

    Z + (1/4)s1 + (5/4)s2 = 34

    (1/2)s1 − (1/2)s2 + s3 = 2

    x2 + (3/4)s1 − (1/4)s2 = 6

    x1 − (1/2)s1 + (1/2)s2 = 4

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0


    On examining the above system of equations, we find that the value of Z cannot be improved any

    further by increasing the values of the non-basic variables s1 and s2. We conclude that we have

    arrived at the optimal solution, which is: Z = 34, x1 = 4, x2 = 6, s1 = 0, s2 = 0, s3 = 2. Hence no

    further iterations are required, and the algorithm terminates at this point.


    3.6 The Simplex Method in Tabular Form

    The algebraic form of the simplex method facilitates the understanding of the logic of the

    algorithm. However, it is not convenient from the point of view of carrying out the required

    computations. It is unnecessary and also cumbersome to write the variables x1, x2, …, xn in

    every iteration.

    Setting up the initial simplex tableau involves no computation; the coefficients of the

    constraints or equations are simply rearranged to form the initial simplex tableau. The tabular form of

    the simplex method records only the essential information pertaining to the current values, during

    any iteration, of (a) the coefficients of the variables, (b) the constants on the right hand side of the

    equations, and (c) the basic variables appearing in each equation. The objective function is

    written in the form of an equation, and is referred to as Row or Equation (0). The functional

    rows/equations are numbered from (1) to (m). The non-negativity restriction of the variables is

    not shown, but is implicit. This saves writing the symbols for the variables in each of the

    equations, highlights the numbers involved in the computation, and records them in a
    compact form.

    To compute the profit or cost for each solution, and to find out whether the solution can be

    improved upon, we include along with row 0 of the simplex tableau an additional row, to be

    referred to as row zj. The zj row is not an absolute requirement; it only provides some

    additional insight. The value of zj represents the amount by which the value of the objective

    function increases (in case of a maximization problem) or decreases (for a minimization

    problem) if one unit of the concerned variable xj is added to the new solution. The current values

    of the negatives of the coefficients of the objective function are given in row 0 by (zj − cj), which

    may be interpreted as relative profit or relative cost, depending on whether it is a maximization

    problem or a minimization problem. Each of the values in the (zj − cj) row represents the net

    amount of increase (decrease) in the objective function if one unit of the variable represented by

    the column head is incorporated into the solution.


    Starting Tableau: s1, s2, and s3 constitute the Basis

    Iteration    Row              Basic     Z |  x1   x2   s1   s2   s3 | Current
                                  Variable   |                         | Solution
    0            zj - cj (row 0)  Z        1 |  -4   -3    0    0    0 |    0
                 zj                          |   0    0    0    0    0 |
    x1 enters    1                s1       0 |   1    2    1    0    0 |   16
    s3 leaves    2                s2       0 |   3    2    0    1    0 |   24
                 3                s3       0 |   1    0    0    0    1 |    6

    Tableau after one Iteration: x1 enters the Basis, s3 leaves the Basis

    Iteration    Row              Basic     Z |  x1   x2   s1   s2   s3 | Current
                                  Variable   |                         | Solution
    1            zj - cj (row 0)  Z        1 |   0   -3    0    0    4 |   24
                 zj                          |   4    0    0    0    4 |
    x2 enters    1                s1       0 |   0    2    1    0   -1 |   10
    s2 leaves    2                s2       0 |   0    2    0    1   -3 |    6
                 3                x1       0 |   1    0    0    0    1 |    6


    Tableau after two Iterations: x2 enters the Basis, s2 leaves the Basis

    Iteration    Row              Basic     Z |  x1   x2   s1   s2    s3   | Current
                                  Variable   |                            | Solution
    2            zj - cj (row 0)  Z        1 |   0    0    0   3/2  -1/2  |   33
                 zj                          |   4    3    0   3/2  -1/2  |
    s3 enters    1                s1       0 |   0    0    1   -1    2    |    4
    s1 leaves    2                x2       0 |   0    1    0   1/2  -3/2  |    3
                 3                x1       0 |   1    0    0    0    1    |    6

    Tableau after three Iterations: s3 enters the Basis, s1 leaves the Basis

    Iteration    Row              Basic     Z |  x1   x2   s1    s2    s3  | Current
                                  Variable   |                            | Solution
    3            zj - cj (row 0)  Z        1 |   0    0   1/4   5/4    0  |   34
                 zj                          |   4    3   1/4   5/4    0  |
    Optimum      1                s3       0 |   0    0   1/2  -1/2    1  |    2
    Solution     2                x2       0 |   0    1   3/4  -1/4    0  |    6
                 3                x1       0 |   1    0  -1/2   1/2    0  |    4

    It can be observed from the above tableau that the optimal solution is:

    Z = 34, x1 = 4, x2 = 6, s1 = 0, s2 = 0, s3 = 2.
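    The whole tabular procedure can be condensed into a short program. The sketch below (our own illustration, not the book's code) combines the entering-variable rule, the minimum ratio test, and the Gauss-Jordan pivot, repeating them until Row 0 has no negative coefficient, and reproduces the result for the example above:

```python
import numpy as np

def simplex_max(c, A, b):
    """Tabular simplex for: maximize c.x  s.t.  A x <= b, x >= 0, b >= 0.
    Returns (optimal value, x). Assumes the problem is bounded."""
    m, n = A.shape
    # Build the tableau: Row 0, then one row per constraint (with slacks).
    T = np.zeros((m + 1, n + m + 1))
    T[0, :n] = -c
    T[1:, :n] = A
    T[1:, n:n + m] = np.eye(m)
    T[1:, -1] = b
    basis = list(range(n, n + m))           # the slacks form the initial basis
    while T[0, :-1].min() < -1e-9:
        col = int(np.argmin(T[0, :-1]))     # entering: most negative coefficient
        ratios = [T[r, -1] / T[r, col] if T[r, col] > 1e-9 else np.inf
                  for r in range(1, m + 1)]
        row = 1 + int(np.argmin(ratios))    # leaving: minimum ratio test
        T[row] /= T[row, col]               # Gauss-Jordan pivot
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row - 1] = col
    x = np.zeros(n + m)
    for i, var in enumerate(basis):
        x[var] = T[i + 1, -1]
    return T[0, -1], x[:n]

z, x = simplex_max(np.array([4.0, 3.0]),
                   np.array([[1.0, 2.0], [3.0, 2.0], [1.0, 0.0]]),
                   np.array([16.0, 24.0, 6.0]))
print(z, x)  # -> 34.0 [4. 6.]
```

    Run on the example, this performs exactly the three pivots traced in the tableaus above.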


    3.7 Artificial Initial Solutions: Modifications for various types of constraints

    The simplex method as described above holds good for the standard form of LP, which

    maximizes Z subject to functional constraints of the less-than-or-equal-to (≤) type, non-

    negativity restrictions on all decision variables, and bi ≥ 0 for all i = 1, 2, …, m. In case of

    any deviations from these conditions, some modifications have to be made during initialization,

    and then the subsequent steps in the simplex method can be applied as described above.

    There is a problem in identifying an initial basic feasible solution (BFS) if there are functional

    constraints of the equality (=) or greater-than-or-equal-to (≥) form, or if there is a negative right-

    hand side. In the LPP solved earlier, the initial BFS was found quite easily by letting the slack

    variables be the initial basic variables, equal to the non-negative right hand sides of

    their respective equations. The approach adopted in these cases is based on the concept of a

    dummy variable or, by the more commonly used term, artificial variable. This technique constructs

    an auxiliary or artificial problem by incorporating an artificial variable into each constraint that is

    not in standard form. The new variable is introduced only for the purpose of being the initial
    basic variable for that equation. The usual non-negativity restrictions are placed on these variables,

    and the objective function is re-formulated in terms of these artificial variables so that there is a

    huge penalty for these variables having values larger than zero. If the LP has a feasible solution,

    the iterations of the simplex method will ensure that, one by one, these artificial variables

    become zero and are not to be considered any further. The real LP is then solved, using the initial

    BFS obtained by this procedure.

    Based on the framework described above, two artificial-variable techniques are available for

    finding out whether a LPP has a feasible solution and for solving linear programming problems

    which are not in standard form. They are:

    (a) The Two-Phase Method, and

    (b) The Big-M Method.

    Most software packages for solving LPPs use the two-phase method. However, the Big-M

    method is of considerable historical importance, and hence it will also be described.

    3.7.1 The Two-Phase Method

    As the name indicates, this method solves the LP in two phases. The objective of Phase I is to

    find out if there is a feasible solution to the system of functional constraints and, if so, to find an
    initial basic feasible solution. If there is no feasible solution, the algorithm stops after Phase I.

    Phase I: The LPP is expressed in canonical (equation) form. In those equations where there is no

    slack variable, artificial variables are added so as to get a starting basic solution. Another LP is

    constructed in which the objective is to minimize the sum of the artificial variables, subject to the

    constraints of the original LP. This is irrespective of whether the objective function in the


    original LP was to be maximized or minimized. If the minimum value of the sum of artificial

    variables is zero, the LP has an initial basic feasible solution and hence feasibility, and the

    algorithm proceeds to Phase II. If the minimum value of the sum of artificial variables is ositive,

    the LP has no feasible solution, and the algorithm terminates after Phase I.

    Phase II: The starting basic feasible solution found in Phase I is used to solve the LP with the
    original objective.

    Consider the following LPP:

    Maximize Z = 4x1 + 3x2

    Subject to the constraints

    x1 + 2x2 ≤ 16

    3x1 + 2x2 = 24

    x1 ≤ 6

    and x1 ≥ 0, x2 ≥ 0

    There is no need to incorporate (add) a slack variable in the second constraint, as it is in the

    form of an equality. However, an artificial variable is added to the left hand side of this

    constraint so as to satisfy the requirement of having a basic variable. After adding the appropriate

    slack variables s1 and s2, and the artificial variable A1, we get

    Maximize Z = 4x1 + 3x2

    Subject to

    x1 + 2x2 + s1 = 16

    3x1 + 2x2 + A1 = 24

    x1 + s2 = 6

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, A1 ≥ 0

    In the first phase of this method, the sum of the artificial variables, say W, is minimized. If the

    LPP has a feasible solution, that is, if all the constraints can be satisfied, the minimum value of W will

    be zero. We then proceed to the second phase, in which we start with the final tableau of the first

    phase, but replace W by Z, the original objective function. If the minimum value of W is not

    zero, the implication is that all the constraints cannot be satisfied simultaneously, and hence the LPP does not have

    a feasible solution. The procedure is terminated because of infeasibility.
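    Phase I for this example is itself a small LP: minimize W = A1 over the augmented variables (x1, x2, s1, s2, A1) subject to the equality form of the constraints. A sketch using SciPy's `linprog` to stand in for the Phase I tableau work (our illustration, not the book's procedure verbatim):

```python
from scipy.optimize import linprog

# Variables: x1, x2, s1, s2, A1 ; minimize W = A1
phase1 = linprog(c=[0, 0, 0, 0, 1],
                 A_eq=[[1, 2, 1, 0, 0],   # x1 + 2x2 + s1      = 16
                       [3, 2, 0, 0, 1],   # 3x1 + 2x2     + A1 = 24
                       [1, 0, 0, 1, 0]],  # x1        + s2     = 6
                 b_eq=[16, 24, 6],
                 bounds=[(0, None)] * 5)
print(phase1.fun)  # minimum of W; 0.0 here, so the LPP is feasible
```

    A minimum of W = 0 means the artificial variable could be driven out of the basis, so the original constraints admit a feasible solution.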


    In this problem, the objective will be to minimize W = A1 or, equivalently, maximize −A1.

    Hence the problem can be depicted in tabular form as follows:

    Iteration    Row   Basic     |  W   x1   x2   s1   s2   A1 | Right Hand
                       Variable  |                             | Side
    0            0     W         |  1    0    0    0    0   -1 |    0
                 1     s1        |  0    1    2    1    0    0 |   16
                 2     A1        |  0    3    2    0    0    1 |   24
                 3     s2        |  0    1    0    0    1    0 |    6

    As A1 is a basic variable, it has to be eliminated from Row 0. To do this, Row 2 (the A1 row) is added to

    Row 0. As we are minimizing (rather than maximizing) W, the non-basic variable in Row (0)

    with the largest positive coefficient, that is, x1, is chosen as the entering basic variable.

    Starting Tableau

    Iteration   Row   Basic      Coefficient of                      Current
                      Variable   W    x1   x2   s1   s2   A1         Solution
    0           0     W          1    3    2    0    0    0          24
    x1 enters   1     s1         0    1    2    1    0    0          16
    s2 leaves   2     A1         0    3    2    0    0    1          24
                3     s2         0    1    0    0    1    0          6

    Tableau after one Iteration:

    Iteration   Row   Basic      Coefficient of                      Right-Hand
                      Variable   W    x1   x2   s1   s2   A1         Side
    1           0     W          1    0    2    0    -3   0          6
    x2 enters   1     s1         0    0    2    1    -1   0          10
    A1 leaves   2     A1         0    0    2    0    -3   1          6
                3     x1         0    1    0    0    1    0          6

    Tableau after two Iterations:

    Iteration   Row   Basic      Coefficient of                      Current
                      Variable   W    x1   x2   s1   s2    A1        Solution
    2           0     W          1    0    0    0    0     -1        0
    Feasible    1     s1         0    0    0    1    2     -1        4
    Solution    2     x2         0    0    1    0    -3/2  1/2       3
                3     x1         0    1    0    0    1     0         6

    The above feasible solution can now be used as an initial basic feasible solution for Phase Two by eliminating the column for A1 and replacing W by Z, the original objective function.

    Initial Tableau for Phase Two after replacing W by Z and removing the column for A1.

    Iteration   Row   Basic      Coefficient of                 Right-Hand
                      Variable   Z    x1   x2   s1   s2         Side
    0           0     Z          1    -4   -3   0    0          0
                1     s1         0    0    0    1    2          4
                2     x2         0    0    1    0    -3/2       3
                3     x1         0    1    0    0    1          6

    As x1 and x2 are basic variables, their coefficients in Row 0 must be made zero. To do this, 3 × Row 2 and 4 × Row 3 are added to Row 0.

    Initial Tableau for Phase Two after modifying Row 0.

    Iteration   Row   Basic      Coefficient of                 Right-Hand
                      Variable   Z    x1   x2   s1   s2         Side
    0           0     Z          1    0    0    0    -1/2       33
    s2 enters   1     s1         0    0    0    1    2          4
    s1 leaves   2     x2         0    0    1    0    -3/2       3
                3     x1         0    1    0    0    1          6

    Tableau after one Iteration: s2 enters the Basis, s1 leaves the Basis

    Iteration   Row   Basic      Coefficient of                 Current
                      Variable   Z    x1   x2   s1    s2        Solution
    1           0     Z          1    0    0    1/4   0         34
    Optimum     1     s2         0    0    0    1/2   1         2
    solution    2     x2         0    0    1    3/4   0         6
                3     x1         0    1    0    -1/2  0         4

    We observe from the above tableau that the optimal solution is:

    Z = 34, x1 = 4, x2 = 6, s1 = 0, s2 = 2.
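    The single pivot that Phase Two needed can be reproduced numerically. A minimal sketch (fractions written as decimals), starting from the Phase Two tableau after Row 0 was modified:

```python
# Phase Two initial tableau (columns: Z, x1, x2, s1, s2, RHS); basic order: Z, s1, x2, x1.
T = [
    [1, 0, 0, 0, -0.5, 33],   # Row 0 after making the x1 and x2 coefficients zero
    [0, 0, 0, 1, 2.0,  4],    # s1 row
    [0, 0, 1, 0, -1.5, 3],    # x2 row
    [0, 1, 0, 0, 1.0,  6],    # x1 row
]
col = 4                                   # s2 enters (only negative Row 0 coefficient)
ratios = {i: T[i][-1] / T[i][col] for i in range(1, len(T)) if T[i][col] > 0}
row = min(ratios, key=ratios.get)         # minimum-ratio rule: row 1, so s1 leaves
T[row] = [v / T[row][col] for v in T[row]]
for i in range(len(T)):
    if i != row and T[i][col]:
        f = T[i][col]
        T[i] = [a - f * b for a, b in zip(T[i], T[row])]
print(T[0][-1])   # 34.0 -> the optimal value of Z
```

    After the pivot the right-hand sides read Z = 34, s2 = 2, x2 = 6, x1 = 4, matching the final tableau.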


    3.7.2 The Big-M Method

    The essence of the Big-M method is to construct a modified linear programming problem that has the same optimal value as the original LP. The modifications made in the original LP are as follows:

    (a) The LP is expressed in canonical form by incorporating slack variables in ≤ constraints and surplus variables in ≥ constraints. Non-negative artificial variables are introduced in those equations which do not have slack variables. The artificial variables serve only the purpose of obtaining an initial basic feasible solution (BFS).

    (b) A very large penalty is assigned to the artificial variables by adding the term -M × (sum of artificial variables) to the objective function if it is a maximization LP. M is a very large positive number, usually 20 times the largest coefficient in the LP. For a minimization LP, the term +M × (sum of artificial variables) is added to the objective function. Slack and surplus variables in the objective function are assigned a zero coefficient.

    (c) The initial basic feasible solution is obtained by assigning a zero value to the original variables.

    The simplex method is then applied to the modified LP problem. While carrying out the iterations, one of the following cases may arise:

    (a) The optimality condition is satisfied with no artificial variable remaining in the basis, that is, all the artificial variables have a value of zero. This implies that the current solution is an optimal basic feasible solution (BFS).

    (b) One or more artificial variables are in the basis at zero value, and the optimality condition is satisfied. The current solution is then a degenerate optimal basic feasible solution.

    (c) One or more artificial variables appear in the basis at positive values, and the optimality condition is satisfied. In this case, the original LP has no feasible solution. The solution so obtained is referred to as a pseudo-optimal solution, because it satisfies the constraints of the modified problem but does not optimize the original objective function, as it contains a very large penalty term.
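    Because M only needs to dominate every ordinary coefficient, it can also be handled symbolically rather than as a concrete number: represent each Row 0 entry a + bM as the pair (a, b) and compare pairs lexicographically by (b, a). The pair representation below is our own illustrative device, not part of the standard statement of the method; the sample coefficients are those of the example that follows.

```python
# Represent a + b*M as the pair (a, b); since M is arbitrarily large,
# a + b*M < c + d*M  iff  (b, a) < (d, c) lexicographically.
def key(p):
    a, b = p
    return (b, a)

# Row 0 coefficients of the example that follows, after the artificial
# variable A1 (penalty M) has been eliminated from Row 0:
row0 = {"x1": (-4, -3), "x2": (-3, -2), "s1": (0, 0), "s2": (0, 1), "s3": (0, 0)}
entering = min(row0, key=lambda v: key(row0[v]))
print(entering)                       # x1: -(4 + 3M) is the most negative coefficient
assert key(row0[entering]) < (0, 0)   # it is negative, so the solution is not yet optimal
```

    This avoids choosing a numeric M that is "large enough", which can cause round-off trouble in floating-point arithmetic.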


    Example:

    Consider the following LPP:

    Maximize Z = 4x1 + 3x2

    Subject to

    x1 + 2x2 ≤ 16

    3x1 + 2x2 ≥ 24

    x1 ≤ 6

    and x1 ≥ 0, x2 ≥ 0

    After adding the appropriate slack variables s1 and s3, the surplus variable s2, and the artificial variable A1, we have the LPP in standard form:

    Maximize Z = 4x1 + 3x2

    Subject to

    x1 + 2x2 + s1 = 16

    3x1 + 2x2 - s2 + A1 = 24

    x1 + s3 = 6

    and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0, A1 ≥ 0

    Iteration   Row   Basic      Coefficient of                           Right-Hand
                      Variable   Z    x1   x2   s1   s2   s3   A1         Side
    0           0     Z          1    -4   -3   0    0    0    M          0
                1     s1         0    1    2    1    0    0    0          16
                2     A1         0    3    2    0    -1   0    1          24
                3     s3         0    1    0    0    0    1    0          6

    As A1 is a basic variable, its coefficient in the objective function row must be zero. Hence M times Row 2 is subtracted from Row 0 to yield the following initial tableau.


    Iteration   Row   Basic      Coefficient of                                      Current
                      Variable   Z    x1       x2       s1   s2   s3   A1            Solution
    0           0     Z          1    -(4+3M)  -(3+2M)  0    M    0    0             -24M
    x1 enters   1     s1         0    1        2        1    0    0    0             16
    s3 leaves   2     A1         0    3        2        0    -1   0    1             24
                3     s3         0    1        0        0    0    1    0             6

    1st Iteration

    Iteration   Row   Basic      Coefficient of                                      Current
                      Variable   Z    x1   x2       s1   s2   s3     A1              Solution
    1           0     Z          1    0    -(3+2M)  0    M    4+3M   0               24-6M
    x2 enters   1     s1         0    0    2        1    0    -1     0               10
    A1 leaves   2     A1         0    0    2        0    -1   -3     1               6
                3     x1         0    1    0        0    0    1      0               6

    2nd Iteration

    Iteration   Row   Basic      Coefficient of                                      Current
                      Variable   Z    x1   x2   s1   s2    s3    A1                  Solution
    2           0     Z          1    0    0    0    -3/2  -1/2  (3+2M)/2            33
    s2 enters   1     s1         0    0    0    1    1     2     -1                  4
    s1 leaves   2     x2         0    0    1    0    -1/2  -3/2  1/2                 3
                3     x1         0    1    0    0    0     1     0                   6


    3rd Iteration

    Iteration   Row   Basic      Coefficient of                                      Current
                      Variable   Z    x1   x2   s1    s2   s3    A1                  Solution
    3           0     Z          1    0    0    3/2   0    5/2   M                   39
    Optimal     1     s2         0    0    0    1     1    2     -1                  4
    Solution    2     x2         0    0    1    1/2   0    -1/2  0                   5
                3     x1         0    1    0    0     0    1     0                   6

    We observe from the above tableau that the optimal solution is:

    Z = 39, x1 = 6, x2 = 5, s2 = 4.
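    The whole Big-M run above can be checked numerically by substituting a large number for M, say M = 10**5, a common practical device; any M that dominates the other coefficients produces the same sequence of pivots here.

```python
M = 10**5   # a numerically "very large" penalty standing in for the symbolic M

def pivot(T, r, c):
    """Divide row r by the pivot element, then clear column c in all other rows."""
    p = T[r][c]
    T[r] = [v / p for v in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c]:
            f = T[i][c]
            T[i] = [a - f * b for a, b in zip(T[i], T[r])]

# Columns: Z, x1, x2, s1, s2, s3, A1, RHS; Row 0 already has A1 eliminated.
T = [
    [1, -4 - 3 * M, -3 - 2 * M, 0, M, 0, 0, -24 * M],
    [0, 1, 2, 1, 0, 0, 0, 16],     # x1 + 2x2 + s1 = 16
    [0, 3, 2, 0, -1, 0, 1, 24],    # 3x1 + 2x2 - s2 + A1 = 24
    [0, 1, 0, 0, 0, 1, 0, 6],      # x1 + s3 = 6
]
while True:
    col = min(range(1, 7), key=lambda j: T[0][j])
    if T[0][col] >= -1e-9:                       # optimality reached
        break
    rows = [i for i in (1, 2, 3) if T[i][col] > 1e-9]
    pivot(T, min(rows, key=lambda i: T[i][-1] / T[i][col]), col)
print(T[0][-1])   # 39.0 -> Z = 39 at x1 = 6, x2 = 5
```

    The pivots occur in the same order as in the tableaus: x1 enters and s3 leaves, x2 enters and A1 leaves, then s2 enters and s1 leaves.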

    3.8 Special Cases

    3.8.1 Unbounded Solutions

    It may happen in some LP models that the values of one or more variables can be increased indefinitely, indicating that the feasible region or solution space is unbounded. If, while applying the simplex method, it is observed that all the coefficients in the column corresponding to the entering non-basic variable are non-positive, the solution space is unbounded: the entering variable can be increased without any upper limit, as no basic variable is driven to zero. Even when the solution space is unbounded, however, the optimal value of the objective function need not be unbounded.

    The following is an example of a LP with an unbounded solution.

    Maximize Z = 3x1 + 4x2

    Subject to

    x1 - x2 ≤ 1

    x1 ≤ 2

    and x1 ≥ 0, x2 ≥ 0

    After introducing slack variables s1 and s2, we have the following tableau for applying the simplex method:


    Starting Tableau

    Iteration   Row   Basic      Coefficient of                 Current
                      Variable   Z    x1   x2   s1   s2         Solution
    0           0     Z          1    -3   -4   0    0          0
    x2 enters   1     s1         0    1    -1   1    0          1
    No limit    2     s2         0    1    0    0    1          2
    on x2

    The coefficients in the column corresponding to the entering basic variable x2 are -1 and 0, that is, they are non-positive. This implies that x2 can be increased indefinitely without driving any basic variable to zero, and hence the solution space is unbounded.
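    The test just described (every entry of the entering column non-positive) is easy to state in code. A small sketch for this example's starting tableau:

```python
# Starting tableau for the example (columns: Z, x1, x2, s1, s2, RHS).
T = [
    [1, -3, -4, 0, 0, 0],   # Row 0: Z - 3x1 - 4x2 = 0
    [0, 1, -1, 1, 0, 1],    # x1 - x2 + s1 = 1
    [0, 1,  0, 0, 1, 2],    # x1 + s2 = 2
]
col = min(range(1, 5), key=lambda j: T[0][j])   # x2 enters (coefficient -4)
column = [T[i][col] for i in (1, 2)]            # entering column: (-1, 0)
unbounded = all(v <= 0 for v in column)         # no positive entry: ratio test impossible
if unbounded:
    print("unbounded")
```

    Because no entry of the entering column is positive, the minimum-ratio rule has no candidate row, which is exactly the unboundedness signal.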

    The following is an example of a LP in which the solution space is unbounded, but the optimum (minimum) value of the objective function is finite.

    Minimize Z = x1 + x2

    Subject to the constraints

    5x1 + 3x2 ≥ 8

    3x1 + 4x2 ≥ 7

    and x1 ≥ 0, x2 ≥ 0

    It can be shown by applying the graphical or simplex method that, although the above LP has an unbounded solution space, the optimum or minimum value of Z is 2, occurring at x1 = x2 = 1.

    Unbounded solutions may occur when there is little or no limit on the availability of resources. They may also arise because of some lacuna in the formulation of the LP.


    3.8.2 Infeasible Solution

    This situation arises when the constraints are inconsistent in the sense that all the constraints are

    not satisfied simultaneously and hence there is no feasible solution. If all the constraints are ofthe less than or equal to () typewith all the coefficients in the Right Hand Side (RHS) being

    non-negative and all the variables being restricted to be non-negative so that there are no

    artificial variables in the standard/canonical form of the LP, there is a feasible solution because

    the slack variables themselves provide such a solution. However, for other types of constraints in

    which we use/introduce artificial variables, there is a possibility of having an infeasible solution.

    There is a feasible solution if all the artificial variables are forced to assume a value of zero

    during the application of the simplex method. If, on the other hand, one or more artificial

    variables remain positive, it indicates that there is no feasible solution. Infeasibility may

    emanate from inability to meet demand or other requirements because of inadequate capacity or

    resources being available. The absence of a feasible solution may also be due to the model notbeing formulated correctly. The following is an example of infeasibility in a LP.

    Maximize Z = 3x1 + 4x2

    subject to

    x1 + x2 ≤ 1

    2x1 + x2 ≥ 4

    and x1 ≥ 0, x2 ≥ 0

    After introducing the slack variable s1, the surplus variable s2, and the artificial variable A, we set up the following tableau for performing Phase I of the simplex method, in which W, the sum of the artificial variables, is to be minimized:

    Starting Tableau

    Iteration   Row   Basic      Coefficient of                      Right-Hand
                      Variable   W    x1   x2   s1   s2   A          Side
    0           0     W          1    0    0    0    0    -1         0
                1     s1         0    1    1    1    0    0          1
                2     A          0    2    1    0    -1   1          4


    On expressing the objective W in terms of the non-basic variables x1 and x2, we get

    Initial Tableau

    Iteration   Row   Basic      Coefficient of                      Current
                      Variable   W    x1   x2   s1   s2   A          Solution
    0           0     W          1    2    1    0    -1   0          4
    x1 enters   1     s1         0    1    1    1    0    0          1
    s1 leaves   2     A          0    2    1    0    -1   1          4

    After performing one iteration, we have

    Iteration   Row   Basic      Coefficient of                      Current
                      Variable   W    x1   x2   s1   s2   A          Solution
    1           0     W          1    0    -1   -2   -1   0          2
    Pseudo-     1     x1         0    1    1    1    0    0          1
    Optimum     2     A          0    0    -1   -2   -1   1          2
    Solution

    The optimum or minimum value of W is 2. The artificial variable A (= 2) remains positive, implying that the above LP has no feasible solution.
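    The infeasibility of this example can also be seen directly: with x1 + x2 ≤ 1 and x1, x2 ≥ 0, the expression 2x1 + x2 can never reach 4. A brute-force sketch that scans a fine grid of the first constraint's feasible region:

```python
# Scan the region x1 + x2 <= 1, x1 >= 0, x2 >= 0 and record the largest value
# that the left-hand side of the second constraint, 2*x1 + x2, attains.
steps = 200
best = max(
    2 * (i / steps) + (j / steps)
    for i in range(steps + 1)
    for j in range(steps + 1)
    if i / steps + j / steps <= 1
)
print(best)   # 2.0, attained at x1 = 1, x2 = 0, far short of the required 4
```

    Since 2x1 + x2 ≤ 2 < 4 everywhere in the first constraint's region, the two constraints have no common point, which is exactly what the positive artificial variable signals.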


    3.8.3 Multiple Optimal Solutions (Alternative Optima)

    When the objective function is parallel to a non-redundant constraint, the optimum solution

    occurs at more than one extreme point or basic feasible solution, that is, the objective functionhas the same optimal value at more than one point. This phenomenon is referred to as alternate

    optima. All (linear) convex combinations of these extreme points, that is, all the points on the

    associated hyperplane are also optimal solutions. Hence there is an infinite number of optimum

    solutions.

    Consider the following LPP:

    Maximize Z = 6x1 + 3x2

    Subject to the constraints

    2x1 + x2 ≤ 6

    x1 + 3x2 ≤ 8

    and x1 ≥ 0, x2 ≥ 0

    After introducing slack variables s1 and s2, we have the following tableau for applying the simplex method:

    Starting Tableau

    Iteration   Row   Basic      Coefficient of                 Current
                      Variable   Z    x1   x2   s1   s2         Solution
    0           0     Z          1    -6   -3   0    0          0
    x1 enters   1     s1         0    2    1    1    0          6
    s1 leaves   2     s2         0    1    3    0    1          8


    Second Tableau

    Iteration   Row   Basic      Coefficient of                 Current
                      Variable   Z    x1   x2    s1    s2       Solution
    1           0     Z          1    0    0     3     0        18
    x2 enters   1     x1         0    1    1/2   1/2   0        3
    s2 leaves   2     s2         0    0    5/2   -1/2  1        5

    Third Tableau

    Iteration   Row   Basic      Coefficient of                 Current
                      Variable   Z    x1   x2   s1    s2        Solution
    2           0     Z          1    0    0    3     0         18
    Alternate   1     x1         0    1    0    3/5   -1/5      2
    Optima      2     x2         0    0    1    -1/5  2/5       2

    Corresponding to the two basic feasible solutions, x1 = 3, x2 = 0, and x1 = 2, x2 = 2, we get Z = 18 as the optimal or maximum value. Any convex combination of these two alternate optima, such as the point mid-way between the two, x1 = 5/2, x2 = 1, which is not a basic feasible solution, is also optimal, as the associated value of Z is 18. As the variables in a LP are continuous and not discrete, there are an infinite number of points which are convex combinations of the alternate optima, and hence an infinite number of optimal solutions. This provides greater flexibility to the management of a firm, as they can adopt the solution which suits them best.
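    The claim that every convex combination of the two alternate optima is itself optimal can be checked numerically. A small sketch; the two extreme points are taken from the tableaus above:

```python
A = (3, 0)   # optimal extreme point from the second tableau
B = (2, 2)   # optimal extreme point from the third tableau

for k in range(11):
    t = k / 10
    x1 = t * A[0] + (1 - t) * B[0]          # convex combination t*A + (1-t)*B
    x2 = t * A[1] + (1 - t) * B[1]
    assert 2 * x1 + x2 <= 6 + 1e-9           # first constraint holds
    assert x1 + 3 * x2 <= 8 + 1e-9           # second constraint holds
    assert abs(6 * x1 + 3 * x2 - 18) < 1e-9  # objective stays at 18
print("all convex combinations are feasible and give Z = 18")
```

    Because the objective is linear, its value along the segment joining two optimal points is constant, which is why every point on the segment is optimal.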

    3.9 Concluding Remarks


    Self-Test

    True or False

    1. The non-negativity conditions imply that all decision variables must be positive.

    2. The most frequent objective of business firms is to minimize operational expenses.

    3. In the context of modeling, restrictions on the decisions that can be taken are called

    constraints.

    4. A linear programming model's constraints are almost always nonlinear relationships that describe the restrictions placed on the model's decision variables.

    5. The optimal solution of a linear programming model will occur at at least one extreme point.

    6. The simplex method for solving linear programming problems is partially based on the

    solution of simultaneous equations and matrix algebra.

    7. All the constraints in a linear programming problem are inequalities.

    8. The feasible solution space contains the values for the decision variables that satisfy the majority of the linear programming model's constraints.

    9. The objective function of a cost minimization model need only consider variable, as opposed

    to sunk, costs.

    10. Since fractional values for decision variables may not be physically meaningful, in practice (for the purpose of implementation), we sometimes round the optimal linear programming solution to integer values.

    Multiple Choice Questions

    1. The simplex method is

    a. a mathematical procedure for solving a linear programming problem according to a set of steps
    b. a closed-form solution to a linear programming problem
    c. a graphical solution technique for solving linear programming problems
    d. an analytical technique for solving linear programming problems

    2. Which of the following would cause a change in the feasible region?

    a. increasing the value of a coefficient in the objective function of a minimization problem


    b. decreasing the value of a coefficient in the objective function of a maximization problem
    c. changing the right hand side of a non-redundant constraint
    d. adding a redundant constraint

    3. In linear programming, extreme points are

    a. variables representing unused resources
    b. variables representing an excess above a resource requirement
    c. all the points that simultaneously satisfy all the constraints of the model
    d. corner points on the boundary of the feasible solution space

    4. Every extreme point of the feasible region is defined by

    a. some subset of constraints and non-negativity conditions
    b. the intersection of two constraints
    c. neither of the above
    d. both a and b

    5. In solving a linear programming problem, the condition of infeasibility occurred. This

    problem may be resolved by

    a. trying a different software package
    b. removing or relaxing a constraint
    c. adding another constraint
    d. adding another variable

    6. A linear programming problem in standard form has m constraints and n variables. The

    number of basic feasible solutions will be

    a. nm
    b. at most C(n, m)
    c. at most C(m, n)
    d. none of the above

    7. Which of the following statements is true of an optimal solution to a linear programming

    problem?

    a. The optimal solution always occurs at an extreme point.
    b. If an optimal solution exists, there will always be one at an extreme point.
    c. Every linear programming problem has an optimal solution.
    d. The optimal solution uses up all the resources.

    8. If the feasible region gets larger due to a change in one of the constraints, the optimal value

    of the objective function


    a. must increase or remain the same for a maximization problem
    b. must decrease or remain the same for a maximization problem
    c. must increase or remain the same for a minimization problem
    d. cannot change

    9. If, in any simplex iteration, the leaving rule is violated, then the next table will

    a. Give a non-basic solution
    b. Not give a basic solution
    c. Give a basic or a non-basic solution
    d. Give a basic solution which is not feasible

    10. The graphical approach to solving linear programming problems in two dimensions is

    useful because

    a. it solves the problem quickly
    b. it provides a general method of solving a linear programming problem
    c. it gives geometric insight into the model and the meaning of optimality
    d. all of the above

    11. If, in any simplex iteration, the minimum ratio rule fails, then the linear programming

    problem has

    a. an infeasible solution
    b. a degenerate basic feasible solution
    c. a non-degenerate basic feasible solution
    d. an unbounded solution

    12. If, in phase I of the two-phase simplex method, an artificial variable turns out to be positive

    in the optimal table, then the linear programming problem has

    a. an unbounded solution
    b. no feasible solution
    c. an optimal solution
    d. none of the above

    13. If, in a simplex tableau, there is a tie for the leaving variable, then the next basic feasible

    solution

    a. will be degenerate
    b. will be non-degenerate
    c. may be degenerate or non-degenerate
    d. does not exist

    14. In a maximization LP, a non-basic variable with the most negative value of ( zj - cj )

    entering the basis ensures


    a. that the next solution will be a basic feasible solution
    b. the largest decrease in the objective function
    c. the largest increase in the objective function
    d. none of the above

    15. When alternative optimal solutions exist in an LP problem, then

    a. one of the constraints will be redundant
    b. the objective function will be parallel to one of the constraints
    c. the problem will be unbounded
    d. two constraints will be parallel

    Discussion Questions

    1. Define the following in the context of linear programming:

    (a) slack variable

    (b) surplus variable

    (c) artificial variable

    2. Develop your own set of constraint equations and inequalities and use them to illustrate

    graphically each of the following conditions:

    a. an infeasible problem
    b. a problem containing redundant constraints
    c. an unbounded problem

    3. What is meant by an algorithm? Describe briefly the various steps involved in the simplex

    method. Why is it referred to as the simplex algorithm?

    4. It has been said that each linear programming problem that has a feasible region has an

    infinite number of solutions. Explain.

    5. Describe the two-phase method of solving linear programming problems.

    6. What is the significance of the ( zj - cj ) numbers in the simplex tableau?


    Problem Set

    1. Work through the simplex method (in algebraic form) step by step to solve the following

    linear programming problems:

    (a) Maximize Z = x1 + 2x2 + 2x3,

    subject to

    5x1 + 2x2 + 3x3 ≤ 15

    x1 + 4x2 + 2x3 ≤ 12

    2x1 + x3 ≤ 8

    and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

    (b) Maximize Z = 2x1 - 3x2

    subject to

    -x1 + x2 ≤ 2

    2x1 - x2 ≤ 2

    -x1 - x2 ≤ 2

    2. Use the two phase method to solve the following linear programming problems:

    (a) Maximize Z = -3x1 + x2 + x3,

    subject to

    x1 - 2x2 + x3 ≤ 11

    -4x1 + x2 + 2x3 ≥ 3

    -2x1 + x3 = 1

    and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

    (b) Minimize Z = 3x1 + x2

    subject to

    5x1 + 10x2 - x3 = 8

    x1 + x2 + x4 = 1

    and x1, x2, x3, x4 ≥ 0.

    3. Use the Big-M (penalty) method to solve the following linear programming problem:

    Maximize Z = 5x1 + 2x2 + 10x3,

    subject to

    x1 - x3 ≤ 10

    x2 + x3 ≥ 10

    and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

    4. Solve the following linear programming problem, using the two-phase method and the Big-M method separately.

    Maximize Z = 3x1 - 3x2 + x3,

    subject to

    x1 + 2x2 - x3 ≤ 5

    -3x1 - x2 + x3 ≥ 4

    and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

    5. Consider the system of inequalities:

    x1 + x2 ≥ 1

    -2x1 + x2 ≤ 2

    2x1 + 3x2 ≤ 7

    and x1 ≥ 0, x2 ≥ 0

    Use the simplex algorithm to find

    (a) a Basic Feasible Solution, and

    (b) a Basic Feasible Solution in which both x1 and x2 are Basic Variables.

    6. Solve the following linear programming problem to show that it has no feasible solution.

    Maximize Z = 4x1 + x2 + 4x3 + 5x4

    subject to

    4x1 + 6x2 - 5x3 + 4x4 ≤ -20

    3x1 - 2x2 + 4x3 + x4 ≤ 10

    8x1 - 3x2 - 3x3 + 2x4 ≥ 20

    and x1, x2, x3, x4 ≥ 0

    7. Solve the following linear programming problem to find out whether it has an unbounded solution.

    Maximize Z = 10x1 - x2 + 2x3,

    subject to

    14x1 + x2 - 6x3 + 3x4 = 7

    16x1 + 0.5x2 - 6x3 ≤ 5

    3x1 - x2 - x3 ≤ 0

    and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.

    8. Does there exist an alternative optimal solution to the following linear programming problem? If yes, find the solution.

    Maximize Z = 6x1 - 3x2

    subject to

    x1 + x2 ≤ 6

    2x1 + x2 ≤ 8

    x2 ≤ 3

    and x1, x2 ≥ 0

    Selected References

    1. Anderson, D. R., D. J. Sweeney, and T. A. Williams, An Introduction to Management Science, 10th Edition, Thomson Asia Pvt. Ltd., Singapore, 2003.
    2. Bradley, S. P., A. C. Hax, and T. L. Magnanti, Applied Mathematical Programming, Addison-Wesley Publishing Co., 1977.
    3. Dantzig, George B., Linear Programming and Extensions, Princeton University Press, 1963.
    4. Dantzig, George B., and M. Thapa, Linear Programming I: Introduction, Springer, New York, 1997.
    5. Hillier, F. S., and G. J. Lieberman, Introduction to Operations Research, 9th Edition, McGraw-Hill Publishing Company Ltd., New York, 2010.
    6. Simmonard, M., Linear Programming, Prentice-Hall International, Inc., 1966.
    7. Taha, H. A., Operations Research: An Introduction, 8th Edition, Pearson Prentice Hall, Delhi, 2009.
    8. Vajda, S., Mathematical Programming, Addison-Wesley Publishing Co., 1971.
    9. Wagner, Harvey M., Principles of Operations Research, 2nd Edition, Prentice-Hall of India.