OD Nonlinear Programming 2010



    NONLINEAR PROGRAMMING

    Nonlinear Programming

    Linear programming has a fundamental role in OR.

    In linear programming, all of the functions (objective
    function and constraint functions) are linear.
    This assumption frequently does not hold, and
    nonlinear programming problems are formulated:
    Find x = (x_1, x_2, ..., x_n) so as to
        Maximize f(x)
    subject to
        g_i(x) ≤ b_i,  for i = 1, 2, ..., m,
    and x ≥ 0.



    Nonlinear Programming

    There are many types of nonlinear programming

    problems, depending on f(x) and g_i(x).

    Different algorithms are used for different types.

    Some problems can be solved very efficiently, whilst

    others, even small ones, can be very difficult.

    Nonlinear programming is a particularly large subject

    (all the animals that are not elephants).

    Only some important types will be dealt with here.


    Application: product-mix problem

    In product-mix problems (such as the Wyndor Glass Co. problem) the
    goal is to determine the optimal mix of production levels.
    Sometimes price elasticity is present: the amount of
    product sold has an inverse relation to the price charged:



    Price elasticity

    p(x) is the price required to sell x units.

    c is the unit cost for producing and distributing

    product.

    Profit from producing and selling x units is:
        P(x) = x p(x) − cx


    Product-mix problem

    If each product has a similar profit function, the overall
    objective function is
        f(x) = Σ_{j=1}^{n} P_j(x_j)
    Another nonlinearity: the marginal cost varies with the production level.
    It may decrease when the production level is increased, due
    to the learning-curve effect.
    It may increase due to overtime or more expensive
    production facilities when production increases.
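    As a small illustration of these two pieces, the sketch below evaluates one product's profit P(x) = x p(x) − cx under price elasticity; the linear demand curve p(x), the unit cost c, and the grid of production levels are illustrative assumptions, not data from the slides.

```python
# Sketch: profit under price elasticity, P(x) = x*p(x) - c*x.
# The demand curve p(x) and unit cost c are illustrative assumptions.

def p(x, p0=400.0, slope=2.0):
    """Price required to sell x units (decreasing in x)."""
    return max(p0 - slope * x, 0.0)

def profit(x, c=100.0):
    """Profit from producing and selling x units."""
    return x * p(x) - c * x

# Crude grid search for the best production level of this one product.
best_x = max(range(0, 201), key=profit)
print(best_x, profit(best_x))   # 75 units, profit 11250 for these assumed numbers
```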


    Application: transportation problem

    Determine optimal plan for shipping goods from

    various sources to various destinations (see P&T

    Company problem).

    The cost per unit shipped may not be fixed. Volume
    discounts are sometimes available for large shipments.
    The marginal cost can have a pattern like the one in the figure.
    The cost of shipping x units is then a piecewise linear function
    C(x), with slope equal to the marginal cost.


    Volume discounts on shipping costs
    [Figure: marginal cost and cost of shipping as functions of the amount shipped]


    Transportation problem

    If each combination of source and destination has a
    similar shipping cost function, then the
    cost of shipping x_ij units from source i (i = 1, 2, ..., m)
    to destination j (j = 1, 2, ..., n) is given by a nonlinear
    function C_ij(x_ij).

    The overall objective function is

        Minimize f(x) = Σ_{i=1}^{m} Σ_{j=1}^{n} C_ij(x_ij)
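    A minimal sketch of such a nonlinear (piecewise linear) shipping cost C_ij is given below; the breakpoints, marginal costs, and shipment amounts are illustrative assumptions rather than values from the slides.

```python
# Sketch: a piecewise linear shipping cost C(x) with volume discounts,
# i.e. the marginal cost (slope) drops at assumed breakpoints.

def shipping_cost(x, breakpoints=(10, 30), marginal=(6.0, 4.0, 2.5)):
    """Cost of shipping x units; the marginal cost decreases after each breakpoint."""
    cost, prev = 0.0, 0.0
    for b, m in zip(breakpoints, marginal):
        if x <= b:
            return cost + m * (x - prev)
        cost += m * (b - prev)
        prev = b
    return cost + marginal[-1] * (x - prev)

# Overall objective: sum C_ij(x_ij) over all source-destination pairs (hypothetical plan).
shipments = {("S1", "D1"): 25, ("S1", "D2"): 40}
print(sum(shipping_cost(x) for x in shipments.values()))
```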

    Graphical illustration

    Example: Wyndor Glass Co. problem with the constraint
        9x_1^2 + 5x_2^2 ≤ 216


    Graphical illustration

    Example: Wyndor Glass Co. problem with the objective
    function
        Z = 126x_1 − 9x_1^2 + 182x_2 − 13x_2^2

    Example: Wyndor Glass Co. (3)



    Global and local optimum

    Example: f(x) with three local maxima (x = 0, 2, 4) and
    three local minima (x = 1, 3, 5). Which one is the global maximum?

    Guaranteed local maximum

    A local maximum is guaranteed to be the global maximum when
        d²f(x)/dx² ≤ 0,  for all x.
    A function that always curves downward is a concave
    function (concave downward).
    A function that always curves upward is a convex function
    (concave upward).


    Guaranteed local optimum

    For nonlinear programming with no constraints and a
    concave objective function, a local maximum is the
    global maximum.
    For nonlinear programming with no constraints and a
    convex objective function, a local minimum is the
    global minimum.
    With constraints, this guarantee still holds if the
    feasible region is a convex set.

    Example: Wyndor Glass Co. with one concave g_i(x)


    Types of NP problems

    Unconstrained Optimization: no constraints, simply
        Maximize f(x)
    A necessary condition for a solution x = x* to be optimal:
        ∂f/∂x_j = 0 at x = x*,  for j = 1, 2, ..., n.
    When f(x) is a concave function, this condition is also
    sufficient.
    When x_j has a nonnegativity constraint x_j ≥ 0, the condition changes to:
        ∂f/∂x_j ≤ 0 at x = x*, if x_j* = 0
        ∂f/∂x_j = 0 at x = x*, if x_j* > 0

    Example: nonnegative constraint



    Types of NP problems

    Linearly Constrained Optimization
    All constraints are linear and the objective function is nonlinear.
    Special case: Quadratic Programming
    The objective function is quadratic.
    Many applications, e.g. portfolio selection and predictive control.
    Convex Programming — assumptions for maximization:
    f(x) is a concave function.
    Each g_i(x) is a convex function.
    For a minimization problem, f(x) must be a convex function.

    Types of NP problems

    Separable Programming: convex programming where
    f(x) and the g_i(x) are separable functions.
    A separable function is a function in which each term
    involves only a single variable:
        f(x) = Σ_{j=1}^{n} f_j(x_j)
    Nonconvex Programming: a local optimum is not
    assured to be a global optimum.


    Types of NP problems

    Geometric Programming is applied to engineering

    design as well as economics and statistics problems.

    The objective function and constraint functions are of the
    form:
        g(x) = Σ_{i=1}^{N} c_i P_i(x),  where  P_i(x) = x_1^{a_i1} x_2^{a_i2} ··· x_n^{a_in}
    The c_i and a_ij typically represent physical constants.
    When they are all strictly positive, the functions are
    generalized positive polynomials (posynomials), and a
    convex programming algorithm can be applied.

    Types of NP problems

    Fractional Programming
        Maximize f(x) = f_1(x) / f_2(x)
    which simultaneously maximizes f_1(x) and minimizes f_2(x).
    When f_1(x) and f_2(x) are linear,
        f(x) = (cx + c_0) / (dx + d_0),
    the problem can be transformed into a linear programming problem.


    One-variable unconstrained optimization

    Methods for solving unconstrained optimization with
    only one variable (n = 1), where the differentiable
    function f(x) is concave.

    A necessary and sufficient condition for an optimum:
        df(x)/dx = 0 at x = x*.

    Solving the optimization problem

    If f(x) is not simple, the problem cannot be solved
    analytically.
    In that case, search procedures can solve the problem
    numerically.
    We will describe two methods:
    Bisection method
    Newton's method


    Bisection method

    We know that, for a concave f(x):
        df(x)/dx > 0 if x < x*,
        df(x)/dx = 0 if x = x*,
        df(x)/dx < 0 if x > x*.
    If the derivative at a trial point x' is positive, x' is a lower bound on x*.
    If the derivative at x' is negative, x' is an upper bound on x*.

    Bisection method

    Notation:
        x'  = current trial solution,
        x_L = current lower bound on x*,
        x_U = current upper bound on x*,
        ε   = error tolerance for x*.
    In the bisection method, the new trial solution is the
    midpoint between the two current bounds.


    Algorithm of the Bisection Method

    Initialization: Select ε. Find initial lower and upper
    bounds x_L and x_U by inspection. Select the initial trial solution:
        x' = (x_L + x_U) / 2
    Iteration:
    1. Evaluate df(x)/dx at x = x'.
    2. If df(x')/dx ≥ 0, reset x_L = x'.
    3. If df(x')/dx ≤ 0, reset x_U = x'.
    4. Select a new x' = (x_L + x_U) / 2.
    Stopping rule: If x_U − x_L ≤ 2ε, stop; x' is the estimate of x*. Otherwise, go to 1.

    Example

    Maximize f(x) = 12x − 3x^4 − 2x^6


    Solution

    The first two derivatives:
        df(x)/dx = 12(1 − x^3 − x^5)
        d²f(x)/dx² = −12(3x^2 + 5x^4)
    Applying the bisection method:

    Iteration | df(x')/dx | x_L      | x_U     | New x'    | f(x')
    0         |           | 0        | 2       | 1         | 7.0000
    1         | −12       | 0        | 1       | 0.5       | 5.7812
    2         | +10.12    | 0.5      | 1       | 0.75      | 7.6948
    3         | +4.09     | 0.75     | 1       | 0.875     | 7.8439
    4         | −2.19     | 0.75     | 0.875   | 0.8125    | 7.8672
    5         | +1.31     | 0.8125   | 0.875   | 0.84375   | 7.8829
    6         | −0.34     | 0.8125   | 0.84375 | 0.828125  | 7.8815
    7         | +0.51     | 0.828125 | 0.84375 | 0.8359375 | 7.8839

    Solution: x* ≈ 0.836, with 0.828125 ≤ x* ≤ 0.84375.
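    A short sketch of the bisection method in code, applied to this example; the tolerance ε = 0.01 is an assumption chosen so that the run stops with the same final interval as the table above.

```python
# Sketch: bisection method for maximizing the concave f(x) = 12x - 3x**4 - 2x**6.
# The tolerance eps = 0.01 is an assumption, not stated on the slides.

def fprime(x):
    return 12 * (1 - x**3 - x**5)      # df/dx

def bisection(lo, hi, eps=0.01):
    x = (lo + hi) / 2.0
    while hi - lo > 2 * eps:
        if fprime(x) >= 0:             # x is a lower bound on x*
            lo = x
        else:                          # x is an upper bound on x*
            hi = x
        x = (lo + hi) / 2.0
    return x

print(bisection(0.0, 2.0))             # 0.8359375, matching the table's last trial
```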


    Newton's method

    This method approximates f(x) in the neighborhood of the
    current trial solution by a quadratic function.
    The quadratic approximation uses the Taylor series
    truncated after the second-derivative term:
        f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} − x_i) + (f''(x_i)/2)(x_{i+1} − x_i)²
    It is maximized by setting its derivative with respect to x_{i+1} equal to zero
    (x_i, f(x_i), f'(x_i) and f''(x_i) are constants):
        f'(x_i) + f''(x_i)(x_{i+1} − x_i) = 0
    which gives
        x_{i+1} = x_i − f'(x_i) / f''(x_i)

    Algorithm of Newton's Method
    Initialization: Select ε. Find an initial trial solution x_i by
    inspection. Set i = 1.
    Iteration i:
    1. Calculate f'(x_i) and f''(x_i).
    2. Set x_{i+1} = x_i − f'(x_i) / f''(x_i).
    Stopping rule: If |x_{i+1} − x_i| < ε, stop; x_{i+1} is essentially optimal.
    Otherwise, set i = i + 1 and perform another iteration.


    Example

    Maximize again f(x) = 12x − 3x^4 − 2x^6.
    The new solution is given by:
        x_{i+1} = x_i − f'(x_i)/f''(x_i) = x_i + (1 − x_i^3 − x_i^5) / (3x_i^2 + 5x_i^4)
    Selecting x_1 = 1 and ε = 0.00001:

    Iteration i | x_i     | f(x_i) | f'(x_i) | f''(x_i) | x_{i+1}
    1           | 1       | 7      | −12     | −96      | 0.875
    2           | 0.875   | 7.8439 | −2.1940 | −62.733  | 0.84003
    3           | 0.84003 | 7.8838 | −0.1325 | −55.279  | 0.83763
    4           | 0.83763 | 7.8839 | −0.0006 | −54.790  | 0.83762
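    A minimal sketch of this one-variable Newton iteration in code; it should reproduce the table above.

```python
# Sketch: Newton's method for f(x) = 12x - 3x**4 - 2x**6, starting from x = 1
# with tolerance 1e-5, as in the example.

def f1(x):
    return 12 * (1 - x**3 - x**5)          # f'(x)

def f2(x):
    return -12 * (3 * x**2 + 5 * x**4)     # f''(x)

def newton(x, eps=1e-5, max_iter=50):
    for _ in range(max_iter):
        x_new = x - f1(x) / f2(x)          # maximize the quadratic approximation
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

print(newton(1.0))                         # approx. 0.83762
```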

    Multivariable unconstrained optimization

    Problem: maximizing a concave function f(x) of
    multiple variables x = (x_1, x_2, ..., x_n) with no constraints.
    Necessary and sufficient condition for optimality:
    all partial derivatives equal to zero.
    If there is no analytical solution, a numerical search procedure
    must be used.
    One of these is the gradient search procedure:
    it identifies the direction that maximizes the rate at
    which f(x) is increased.


    Gradient search procedure

    Use the values of the partial derivatives to select the specific
    direction to move, using the gradient.
    The gradient at a point x = x' is the vector of partial
    derivatives evaluated at x = x':
        ∇f(x') = (∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n) evaluated at x = x'
    The procedure moves in the direction of this gradient until f(x) stops
    increasing. Each iteration changes the trial solution x':
        Reset x' = x' + t* ∇f(x')
    where t* is the value of t ≥ 0 that maximizes f(x' + t∇f(x')):
        f(x' + t* ∇f(x')) = max_{t ≥ 0} f(x' + t ∇f(x'))
    The function f(x' + t∇f(x')) is simply f(x) where
        x_j = x_j' + t (∂f/∂x_j) at x = x',  for j = 1, 2, ..., n.
    Iterations continue until ∇f(x) = 0 within a tolerance ε, that is, until
        |∂f/∂x_j| ≤ ε,  for j = 1, 2, ..., n.


    Summary of gradient search procedure

    Initialization: Select ε and any initial trial solution x'.
    Go to the stopping rule.
    Iteration:
    1. Express f(x' + t∇f(x')) as a function of t by setting
           x_j = x_j' + t (∂f/∂x_j) at x = x',  for j = 1, 2, ..., n,
       and substituting these expressions into f(x).
    2. Use a one-variable search procedure to find t = t* that maximizes
       f(x' + t∇f(x')) over t ≥ 0.
    3. Reset x' = x' + t*∇f(x'). Go to the stopping rule.
    Stopping rule: Evaluate ∇f(x) at x = x'. Check whether
           |∂f/∂x_j| ≤ ε,  for j = 1, 2, ..., n.
    If so, stop with the current x' as the approximation of x*.
    Otherwise, perform another iteration.


    Example

    Maximize f(x) = 2x_1 x_2 + 2x_2 − x_1^2 − 2x_2^2.
    Thus,
        ∂f/∂x_1 = 2x_2 − 2x_1,
        ∂f/∂x_2 = 2x_1 + 2 − 4x_2.
    Verify that f(x) is concave (see Appendix 2 of Hillier's
    book).
    Suppose that x' = (0, 0) is the initial trial solution. Then
        ∇f(0, 0) = (0, 2).

    Example (2)

    Iteration 1: Step 1 sets
        x_1 = 0 + t(0) = 0,
        x_2 = 0 + t(2) = 2t;
    substituting these expressions into f(x) gives
        f(x' + t∇f(x')) = f(0, 2t) = 2(0)(2t) + 2(2t) − 0^2 − 2(2t)^2 = 4t − 8t^2.
    Because
        f(0, 2t*) = max_{t ≥ 0} f(0, 2t) = max_{t ≥ 0} {4t − 8t^2}


    Example (3)

    and
        d/dt (4t − 8t^2) = 4 − 16t = 0,
    it follows that
        t* = 1/4,
    so
        Reset x' = (0, 0) + (1/4)(0, 2) = (0, 1/2).
    This completes the first iteration. For the new trial solution, the gradient is
        ∇f(0, 1/2) = (1, 0).

    Example (4)

    As ε < 1, another iteration is needed. Iteration 2:
        x = (0, 1/2) + t(1, 0) = (t, 1/2),
    so
        f(x' + t∇f(x')) = f(t, 1/2) = 2(t)(1/2) + 2(1/2) − t^2 − 2(1/2)^2 = 1/2 + t − t^2.


    Example (5)

    Because
        f(t*, 1/2) = max_{t ≥ 0} f(t, 1/2) = max_{t ≥ 0} {1/2 + t − t^2}
    and
        d/dt (1/2 + t − t^2) = 1 − 2t = 0,
    then
        t* = 1/2,
    so
        Reset x' = (0, 1/2) + (1/2)(1, 0) = (1/2, 1/2).
    This completes the second iteration. See figure.

    Illustration of example

    The optimal solution is (1, 1), as ∇f(1, 1) = (0, 0).

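    A compact sketch of the gradient search procedure in code, applied to this example; the one-dimensional step-size search is done here by a simple ternary search over an assumed interval [0, 1], which is enough for this problem.

```python
# Sketch: gradient search on f(x) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2, from (0, 0).

def f(x1, x2):
    return 2 * x1 * x2 + 2 * x2 - x1**2 - 2 * x2**2

def grad(x1, x2):
    return (2 * x2 - 2 * x1, 2 * x1 + 2 - 4 * x2)

def line_max(x, g, lo=0.0, hi=1.0, iters=60):
    """Ternary search for the t maximizing f(x + t*g); f is concave along the line."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(x[0] + m1 * g[0], x[1] + m1 * g[1]) < f(x[0] + m2 * g[0], x[1] + m2 * g[1]):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

x = (0.0, 0.0)
for _ in range(50):
    g = grad(*x)
    if max(abs(g[0]), abs(g[1])) <= 1e-4:      # stopping rule: gradient ~ 0
        break
    t = line_max(x, g)
    x = (x[0] + t * g[0], x[1] + t * g[1])     # reset x' = x' + t* grad f(x')

print(x)                                       # approaches the optimum (1, 1)
```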


    Newton's method

    It uses a quadratic approximation of the objective function
    f(x).
    When the objective function is concave and x and its gradient
    ∇f(x) are written as column vectors,
    the solution x' that maximizes the approximating
    quadratic function is:
        x' = x − [∇²f(x)]^{-1} ∇f(x),
    where ∇²f(x) is the n × n Hessian matrix.

    Newton's method
    The inverse of the Hessian matrix is commonly
    approximated in various ways.
    These approximations are referred to as quasi-Newton
    methods (or variable metric methods).
    Recall that this topic was mentioned in the discipline
    Intelligent Systems, e.g. in neural network learning.
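    As an illustration (not part of the slides), the sketch below applies one well-known quasi-Newton method, BFGS as implemented in SciPy, to the earlier gradient-search example; SciPy minimizes, so the objective is negated.

```python
# Sketch: a quasi-Newton method (BFGS via SciPy) on f(x) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2.
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    return -(2 * x[0] * x[1] + 2 * x[1] - x[0]**2 - 2 * x[1]**2)

res = minimize(neg_f, x0=np.array([0.0, 0.0]), method="BFGS")
print(res.x)    # approx. [1.0, 1.0]
```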


    Conditions for optimality

    Problem                                     | Necessary conditions for optimality           | Also sufficient if:
    One-variable unconstrained                  | df/dx = 0                                     | f(x) concave
    Multivariable unconstrained                 | ∂f/∂x_j = 0  (j = 1, ..., n)                  | f(x) concave
    Constrained, nonnegativity constraints only | ∂f/∂x_j ≤ 0 (= 0 if x_j > 0)  (j = 1, ..., n) | f(x) concave
    General constrained problem                 | Karush-Kuhn-Tucker conditions                 | f(x) concave and g_i(x) convex (i = 1, ..., m)

    Karush-Kuhn-Tucker conditions

    Theorem: Assume that f(x), g_1(x), g_2(x), ..., g_m(x) are
    differentiable functions satisfying regularity conditions. Then
        x* = (x_1*, x_2*, ..., x_n*)
    can be an optimal solution for the NP problem only if
    there are m numbers u_1, u_2, ..., u_m such that all the
    KKT conditions are satisfied:
    1.  ∂f/∂x_j − Σ_{i=1}^{m} u_i ∂g_i/∂x_j ≤ 0
    2.  x_j* (∂f/∂x_j − Σ_{i=1}^{m} u_i ∂g_i/∂x_j) = 0
        at x = x*, for j = 1, 2, ..., n.


    Karush-Kuhn-Tucker conditions

    3.  g_i(x*) − b_i ≤ 0,  for i = 1, 2, ..., m.
    4.  u_i [g_i(x*) − b_i] = 0,  for i = 1, 2, ..., m.
    5.  x_j* ≥ 0,  for j = 1, 2, ..., n.
    6.  u_i ≥ 0,  for i = 1, 2, ..., m.
    Conditions 2 and 4 require that one of the two
    quantities in each product must be zero.
    Thus, conditions 3 and 4 can be combined:
    (3,4)  g_i(x*) − b_i ≤ 0  (or = 0, if u_i > 0),  for i = 1, 2, ..., m.

    Karush-Kuhn-Tucker conditions

    Similarly, conditions 1 and 2 can be combined:
    (1,2)  ∂f/∂x_j − Σ_{i=1}^{m} u_i ∂g_i/∂x_j ≤ 0  (or = 0, if x_j* > 0),  for j = 1, 2, ..., n.
    Corollary: assume that f(x) is concave and g_1(x), g_2(x),
    ..., g_m(x) are convex functions, where all
    functions satisfy the regularity conditions. Then
    x* = (x_1*, x_2*, ..., x_n*) is an optimal solution if and only if
    all the conditions of the theorem are satisfied.


    Example

    Maximize f(x) = ln(x_1 + 1) + x_2
    subject to
        2x_1 + x_2 ≤ 3,  and  x_1 ≥ 0, x_2 ≥ 0.
    Thus, m = 1, and g_1(x) = 2x_1 + x_2 is convex.
    Further, f(x) is concave (check it!). Thus, the KKT conditions give
    necessary and sufficient conditions for an optimal solution.

    Example: KKT conditions

    1. (j = 1):  1/(x_1 + 1) − 2u_1 ≤ 0
    2. (j = 1):  x_1 [1/(x_1 + 1) − 2u_1] = 0
    1. (j = 2):  1 − u_1 ≤ 0
    2. (j = 2):  x_2 (1 − u_1) = 0
    3.  2x_1 + x_2 − 3 ≤ 0
    4.  u_1 (2x_1 + x_2 − 3) = 0
    5.  x_1 ≥ 0, x_2 ≥ 0
    6.  u_1 ≥ 0


    Example: solving KKT conditions

    From condition 1 (j = 2), u_1 ≥ 1. From condition 5, x_1 ≥ 0.
    Therefore,
        1/(x_1 + 1) − 2u_1 < 0.
    Therefore, x_1 = 0, from condition 2 (j = 1).
    u_1 > 0 implies that 2x_1 + x_2 − 3 = 0, from condition 4.
    The two previous steps imply that x_2 = 3.
    x_2 > 0 implies that u_1 = 1, from condition 2 (j = 2).
    No conditions are violated for x_1 = 0, x_2 = 3, u_1 = 1. Consequently, x* = (0, 3).
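    As a quick sanity check (not part of the slides), the sketch below evaluates every KKT condition of this example numerically at x* = (0, 3) with u_1 = 1.

```python
# Sketch: numerically checking the KKT conditions at x* = (0, 3), u1 = 1.
x1, x2, u1 = 0.0, 3.0, 1.0

df_dx1 = 1.0 / (x1 + 1.0)     # d/dx1 of ln(x1 + 1) + x2
df_dx2 = 1.0                  # d/dx2 of ln(x1 + 1) + x2
g = 2 * x1 + x2               # g1(x), with b1 = 3

checks = {
    "1 (j=1)": df_dx1 - 2 * u1 <= 0,
    "2 (j=1)": x1 * (df_dx1 - 2 * u1) == 0,
    "1 (j=2)": df_dx2 - u1 <= 0,
    "2 (j=2)": x2 * (df_dx2 - u1) == 0,
    "3": g - 3 <= 0,
    "4": u1 * (g - 3) == 0,
    "5": x1 >= 0 and x2 >= 0,
    "6": u1 >= 0,
}
print(all(checks.values()))   # True: (0, 3) with u1 = 1 satisfies all KKT conditions
```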

    Quadratic Programming

    The problem is to find x so as to
        Maximize f(x) = cx − (1/2) x^T Q x,
    subject to
        Ax ≤ b  and  x ≥ 0,
    where the objective function can be expressed as
        f(x) = cx − (1/2) x^T Q x = Σ_{j=1}^{n} c_j x_j − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} q_ij x_i x_j.


    Example

    Maximize f(x) = 15x_1 + 30x_2 + 4x_1 x_2 − 2x_1^2 − 4x_2^2
    subject to
        x_1 + 2x_2 ≤ 30,  and  x_1 ≥ 0, x_2 ≥ 0.
    In this case,
        c = [15  30],   x = [x_1  x_2]^T,   A = [1  2],   b = [30],
        Q = [  4  −4
              −4   8 ].
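    For a quick numerical check (not the modified simplex method discussed next), the sketch below solves this example with SciPy's general SLSQP solver.

```python
# Sketch: solving the QP example numerically with SciPy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

c = np.array([15.0, 30.0])
Q = np.array([[4.0, -4.0], [-4.0, 8.0]])

def neg_f(x):
    return -(c @ x - 0.5 * x @ Q @ x)          # SciPy minimizes, so negate f

cons = ({"type": "ineq", "fun": lambda x: 30.0 - (x[0] + 2.0 * x[1])},)   # x1 + 2*x2 <= 30
res = minimize(neg_f, x0=np.zeros(2), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, -res.fun)                         # approx. (12, 9) with value 270
```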

    Solving QP problems

    Some KKT conditions for quadratic programming
    problems can be transformed into equality constraints
    by introducing slack variables.
    The KKT conditions can then be condensed by pairing the
    complementary variables, introducing a
    complementarity constraint.
    Thus, except for the complementarity constraint, all the
    KKT conditions are linear programming constraints.


    Solving QP problems

    Using the previous properties, QP problems can be

    solved using a modified simplex method.

    See an example of a QP problem in Hillier's book (pages
    580-581).
    Excel, LINGO, LINDO, Matlab and MPL/CPLEX can all
    solve quadratic programming problems.

    Separable Programming

    It is assumed that f(x) is concave and the g_i(x) are convex, with
        f(x) = Σ_{j=1}^{n} f_j(x_j).
    f(x) is a (concave) piecewise linear function (see
    example). If the g_i(x) are linear, the problem can be reformulated as
    an LP problem by using a separate variable for each
    line segment, as in the sketch below.
    The same technique can be used for nonlinear g_i(x).
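    A minimal sketch of the line-segment reformulation for one separable, concave piecewise linear term; the segment lengths, slopes, and capacity are illustrative assumptions, not data from the slides.

```python
# Sketch: LP reformulation of a concave piecewise linear profit term f_j(x_j).
# One variable y_k per line segment; x = y1 + y2 + y3. Numbers are illustrative.
from scipy.optimize import linprog

slopes  = [8.0, 5.0, 2.0]      # decreasing marginal profit (concavity)
lengths = [10.0, 10.0, 10.0]   # length of each line segment

c = [-s for s in slopes]       # linprog minimizes, so negate the profit coefficients
A_ub = [[1.0, 1.0, 1.0]]       # a linear resource constraint: y1 + y2 + y3 <= 25
b_ub = [25.0]
bounds = [(0.0, L) for L in lengths]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)         # fills segments in order: (10, 10, 5), profit 140
```

    Because the slopes decrease, the LP automatically uses the segments in the correct order, which is exactly why the reformulation is valid for a concave f_j.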


    Example (figures).


    Convex Programming

    Many algorithms can be used, falling into 3 categories:

    1. Gradient algorithms, where the gradient search
    procedure is modified to avoid violating a constraint.
    Example: the generalized reduced gradient (GRG) method.
    2. Sequential unconstrained algorithms, which include penalty
    function and barrier function methods.
    Example: the sequential unconstrained minimization
    technique (SUMT).
    3. Sequential approximation algorithms, which include linear
    and quadratic approximation methods.
    Example: the Frank-Wolfe algorithm.

    Frank-Wolfe algorithm

    It is a sequential linear approximation algorithm.

    It replaces the objective function f(x) by the first-order
    Taylor expansion of f(x) around x = x', namely,
        f(x) ≈ f(x') + Σ_{j=1}^{n} (∂f(x')/∂x_j)(x_j − x_j') = f(x') + ∇f(x')(x − x').
    As f(x') and ∇f(x')x' have fixed values, they can be
    dropped to give the linear objective function
        g(x) = ∇f(x') x = Σ_{j=1}^{n} c_j x_j,  where c_j = ∂f/∂x_j at x = x'.


    Frank-Wolfe algorithm

    The simplex method is applied to find an optimal solution x_LP of this LP problem.
    Then, choose the point that maximizes the nonlinear
    objective function along the line segment between the current trial solution and x_LP.
    This can be done using a one-variable unconstrained
    optimization algorithm.
    The algorithm continues iterating until the stopping
    condition is satisfied.

    Summary of Frank-Wolfe algorithm

    Initialization: Find a feasible initial trial solution x^(0), e.g.
    using LP to find an initial BF solution. Set k = 1.
    Iteration k:
    1. For j = 1, 2, ..., n, evaluate
           ∂f(x)/∂x_j at x = x^(k−1)
       and set c_j equal to this value.
    2. Find an optimal solution x_LP^(k) by solving the LP problem:
           Maximize g(x) = Σ_{j=1}^{n} c_j x_j,  subject to Ax ≤ b and x ≥ 0.


    Summary of Frank-Wolfe algorithm

    3. For the variable t ∈ [0, 1], set
           h(t) = f(x)  for  x = x^(k−1) + t (x_LP^(k) − x^(k−1)),
       so that h(t) gives the value of f(x) on the line segment
       between x^(k−1) (where t = 0) and x_LP^(k) (where t = 1).
       Use one-variable unconstrained optimization to
       maximize h(t), giving x^(k).
    Stopping rule: If x^(k−1) and x^(k) are sufficiently close,
    stop; x^(k) is the estimate of the optimal solution.
    Otherwise, reset k = k + 1 and perform another iteration.

    Example

    Maximize f(x) = 5x_1 − x_1^2 + 8x_2 − 2x_2^2
    subject to
        3x_1 + 2x_2 ≤ 6,  and  x_1 ≥ 0, x_2 ≥ 0.
    As
        ∂f/∂x_1 = 5 − 2x_1,   ∂f/∂x_2 = 8 − 4x_2,
    the unconstrained maximum x = (2.5, 2) violates the
    functional constraint.


    Example (2)

    Iteration 1: x = (0, 0) is feasible; it is the initial trial solution x^(0).
    Step 1 gives c_1 = 5 and c_2 = 8, so g(x) = 5x_1 + 8x_2.
    Step 2: solving graphically yields x_LP^(1) = (0, 3).
    Step 3: the points between (0, 0) and (0, 3) are
        (x_1, x_2) = (0, 0) + t [(0, 3) − (0, 0)] = (0, 3t)  for t ∈ [0, 1].
    This expression gives
        h(t) = f(0, 3t) = 8(3t) − 2(3t)^2 = 24t − 18t^2.

    Example (3)

    The value t = t* that maximizes h(t) is given by
        dh(t)/dt = 24 − 36t = 0,
    so t* = 2/3. This result leads to the next trial solution
    (see figure):
        x^(1) = (0, 0) + (2/3)[(0, 3) − (0, 0)] = (0, 2).
    Iteration 2: following the same procedure leads to the
    next trial solution x^(2) = (5/6, 7/6).


    Example (4)


    Example (5)

    The figure shows the next iterations.
    Note that the trial solutions alternate between two
    trajectories that intersect at the point x = (1, 1.5).
    This is the optimal solution (it satisfies the KKT conditions).
    Using quadratic instead of linear approximations leads
    to much faster convergence.
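    A compact sketch of the Frank-Wolfe iterations for this example (not from the slides); the LP subproblem is solved with SciPy's linprog, and the one-variable search over t is a simple grid search.

```python
# Sketch: Frank-Wolfe on f(x) = 5*x1 - x1**2 + 8*x2 - 2*x2**2, s.t. 3*x1 + 2*x2 <= 6, x >= 0.
import numpy as np
from scipy.optimize import linprog

def f(x):
    return 5 * x[0] - x[0]**2 + 8 * x[1] - 2 * x[1]**2

def grad(x):
    return np.array([5 - 2 * x[0], 8 - 4 * x[1]])

x = np.zeros(2)                          # feasible initial trial solution x^(0) = (0, 0)
for _ in range(20):
    c = grad(x)
    # LP subproblem: maximize c @ x over the feasible region (linprog minimizes, hence -c)
    lp = linprog(-c, A_ub=[[3.0, 2.0]], b_ub=[6.0], bounds=[(0, None), (0, None)])
    # One-variable search over t in [0, 1] along the segment x + t*(x_lp - x)
    ts = np.linspace(0.0, 1.0, 2001)
    t_best = ts[int(np.argmax([f(x + t * (lp.x - x)) for t in ts]))]
    x_new = x + t_best * (lp.x - x)
    if np.linalg.norm(x_new - x) < 1e-4:
        break
    x = x_new

print(x, f(x))                           # trial solutions approach (1, 1.5)
```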


    Sequential unconstrained minimization

    Main versions of SUMT:
    the exterior-point algorithm deals with infeasible solutions
    and a penalty function;
    the interior-point algorithm deals with feasible solutions
    and a barrier function.
    It exploits the advantage that unconstrained
    problems are much easier to solve.
    Each unconstrained problem in the sequence chooses
    a smaller and smaller value of r, and solves for x so as to
        Maximize P(x; r) = f(x) − r B(x)

    SUMT

    B(x) is a barrier function with the following properties:
    1. B(x) is small when x is far from the boundary of the feasible
    region.
    2. B(x) is large when x is close to the boundary of the feasible
    region.
    3. B(x) → ∞ as the distance from the boundary of the feasible region
    → 0.
    The most common choice of B(x) is
        B(x) = Σ_{i=1}^{m} 1/(b_i − g_i(x)) + Σ_{j=1}^{n} 1/x_j.


    Summary of SUMT

    Initialization: Find a feasible initial trial solution x^(0) that is not
    on the boundary of the feasible region. Set k = 1. Choose
    values for r and θ < 1 (e.g. r = 1 and θ = 0.01).
    Iteration k: starting from x^(k−1), apply a multivariable
    unconstrained optimization procedure (e.g. the gradient
    search procedure) to find a local maximum x^(k) of
        P(x; r) = f(x) − r [ Σ_{i=1}^{m} 1/(b_i − g_i(x)) + Σ_{j=1}^{n} 1/x_j ].

    Summary of SUMT

    Stopping rule: If the change from x^(k−1) to x^(k) is very small,
    stop and use x^(k) as the local maximum. Otherwise, set
    k = k + 1 and r = θr for another iteration.
    SUMT can be extended to handle equality constraints.
    Note that SUMT is quite sensitive to numerical
    instability, so it should be applied cautiously.


    Example

    Maximize f(x) = x_1 x_2
    subject to
        x_1^2 + x_2 ≤ 3,  and  x_1 ≥ 0, x_2 ≥ 0.
    g_1(x) = x_1^2 + x_2 is convex, but f(x) = x_1 x_2 is not concave.
    Initialization: (x_1, x_2) = x^(0) = (1, 1), r = 1 and θ = 0.01.
    For each iteration:
        P(x; r) = x_1 x_2 − r [ 1/(3 − x_1^2 − x_2) + 1/x_1 + 1/x_2 ].

    Example (2)

    For r = 1, maximization leads to x^(1) = (0.90, 1.36).
    The table below shows convergence to (1, 2).

    k | r     | x_1^(k) | x_2^(k)
    0 |       | 1       | 1
    1 | 1     | 0.90    | 1.36
    2 | 10^−2 | 0.987   | 1.925
    3 | 10^−4 | 0.998   | 1.993
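    A rough sketch of this SUMT run in code (not from the slides); the inner unconstrained maximization uses SciPy's Nelder-Mead instead of the gradient search procedure, and infeasible points are simply rejected.

```python
# Sketch: SUMT (barrier version) on maximize x1*x2 s.t. x1**2 + x2 <= 3, x >= 0,
# with r = 1, theta = 0.01, starting from (1, 1).
import numpy as np
from scipy.optimize import minimize

def neg_P(x, r):
    x1, x2 = x
    slack = 3.0 - x1**2 - x2
    if slack <= 0 or x1 <= 0 or x2 <= 0:   # outside the interior: reject
        return np.inf
    return -(x1 * x2 - r * (1.0 / slack + 1.0 / x1 + 1.0 / x2))

x, r, theta = np.array([1.0, 1.0]), 1.0, 0.01
for k in range(1, 4):
    res = minimize(neg_P, x, args=(r,), method="Nelder-Mead")
    x = res.x
    print(k, r, x)                          # should roughly match the table above
    r *= theta                              # shrink the barrier weight: r = theta * r
```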


    Nonconvex Programming

    Assumptions of convex programming often fail.

    Nonconvex programming problems can be much more

    difficult to solve.

    Dealing with nondifferentiable and discontinuous
    objective functions is usually very complicated.

    LINDO, LINGO and MPL have efficient algorithms to

    deal with these problems.

    Simple problems can be solved by using hill-climbing to
    find a local maximum several times, from different starting
    points, and keeping the best one found (see the sketch below).
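    A minimal sketch of that multistart idea; the nonconvex objective below is an illustrative function, not one taken from the slides.

```python
# Sketch: multistart hill-climbing -- run a simple local hill climb from several
# starting points and keep the best local maximum found.
import math, random

def f(x):
    return math.sin(3 * x) - 0.1 * (x - 3) ** 2   # several local maxima on [0, 6]

def hill_climb(x, step=1e-3, iters=100000):
    """Move uphill in small steps until no neighboring point improves f."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break
    return x

random.seed(1)
starts = [random.uniform(0.0, 6.0) for _ in range(10)]
best = max((hill_climb(s) for s in starts), key=f)
print(best, f(best))                              # best local maximum found
```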

    Nonconvex Programming

    An example is given in Hillier's book, using Excel Solver
    to solve simple problems.

    More difficult problems can use Evolutionary Solver.

    It uses metaheuristics based on genetics, evolution

    and survival of the fittest: a genetic algorithm.

    We presented several well-known metaheuristics for
    solving this type of problem.