M2L4slides




Optimization using Calculus
Optimization of Functions of Multiple Variables subject to Equality Constraints
D Nagesh Kumar, IISc (Optimization Methods: M2L4)


    Objectives

Optimization of functions of multiple variables subject to equality constraints using
the method of constrained variation, and
the method of Lagrange multipliers.


    Constrained optimization

A function of multiple variables, f(X), is to be optimized subject to one or more equality constraints, which are themselves functions of many variables. These equality constraints, g_j(X), may or may not be linear. The problem statement is as follows:

Maximize (or minimize) f(X), subject to g_j(X) = 0,  j = 1, 2, ..., m

where

X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}


    Constrained optimization (contd.)

With the condition that m \le n; or else, if m > n, the problem becomes over-defined and there will be no solution. Of the many available methods, the method of constrained variation and the method of Lagrange multipliers are discussed here.
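As a concrete numerical illustration of this problem statement (a minimal sketch only, assuming SciPy is installed; the objective and constraint below are invented for illustration and are not from these slides), an equality-constrained problem with n = 3 and m = 1 can be posed and solved as:

import numpy as np
from scipy.optimize import minimize

def f(x):                              # objective f(X)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2 + x[2]**2

def g(x):                              # single equality constraint g1(X) = 0
    return x[0] + x[1] + x[2] - 3.0

x0 = np.zeros(3)                       # n = 3 variables, m = 1 (m <= n)
res = minimize(f, x0, constraints=[{"type": "eq", "fun": g}])
print(res.x)                           # numerical minimizer of f subject to g(X) = 0

When equality constraints are supplied and no method is specified, SciPy falls back to an algorithm that supports them (SLSQP).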


Solution by method of Constrained Variation

For the optimization problem defined above, let us consider a specific case with n = 2 and m = 1 before we proceed to find the necessary and sufficient conditions for a general problem using Lagrange multipliers. The problem statement is as follows:

Minimize f(x_1, x_2), subject to g(x_1, x_2) = 0

For f(x_1, x_2) to have a minimum at a point X* = [x_1*, x_2*], a necessary condition is that the total derivative of f(x_1, x_2) must be zero at [x_1*, x_2*]:

df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 = 0        (1)


    Method of Constrained Variation (contd.)

Since g(x_1*, x_2*) = 0 at the minimum point, the variations dx_1 and dx_2 about the point [x_1*, x_2*] must be admissible variations, i.e. the perturbed point must still lie on the constraint:

g(x_1* + dx_1, x_2* + dx_2) = 0        (2)

Assuming dx_1 and dx_2 are small, the Taylor series expansion of this gives us

g(x_1* + dx_1, x_2* + dx_2) = g(x_1*, x_2*) + \frac{\partial g}{\partial x_1}(x_1*, x_2*)\, dx_1 + \frac{\partial g}{\partial x_2}(x_1*, x_2*)\, dx_2 = 0        (3)


    Method of Constrained Variation (contd.)

or

dg = \frac{\partial g}{\partial x_1} dx_1 + \frac{\partial g}{\partial x_2} dx_2 = 0   at [x_1*, x_2*]        (4)

which is the condition that must be satisfied for all admissible variations.

Assuming \partial g / \partial x_2 \neq 0, (4) can be rewritten as

dx_2 = - \frac{\partial g / \partial x_1}{\partial g / \partial x_2}(x_1*, x_2*)\; dx_1        (5)


    Method of Constrained Variation (contd.)

Equation (5) indicates that once the variation along x_1 (dx_1) is chosen arbitrarily, the variation along x_2 (dx_2) is fixed automatically so that the variation remains admissible. Substituting equation (5) in (1), we have:

df\Big|_{(x_1*, x_2*)} = \left( \frac{\partial f}{\partial x_1} - \frac{\partial g/\partial x_1}{\partial g/\partial x_2} \frac{\partial f}{\partial x_2} \right) dx_1 = 0        (6)

The expression on the left hand side is called the constrained variation of f. Equation (6) has to be satisfied for all dx_1, hence we have

\left( \frac{\partial f}{\partial x_1} \frac{\partial g}{\partial x_2} - \frac{\partial f}{\partial x_2} \frac{\partial g}{\partial x_1} \right)\Bigg|_{(x_1*, x_2*)} = 0        (7)

This gives us the necessary condition for [x_1*, x_2*] to be an extreme point (maximum or minimum).
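To see condition (7) in action (a minimal sketch, assuming SymPy is available; the particular f and g below are invented for illustration and are not from these slides), the constrained variation can be formed symbolically and solved together with the constraint:

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + x2**2                  # illustrative objective
g = x1 + x2 - 2                    # illustrative equality constraint g(x1, x2) = 0

# Condition (7): f_x1 * g_x2 - f_x2 * g_x1 = 0
cond7 = sp.diff(f, x1)*sp.diff(g, x2) - sp.diff(f, x2)*sp.diff(g, x1)

print(sp.solve([cond7, g], [x1, x2], dict=True))
# -> [{x1: 1, x2: 1}], the only candidate extreme point on the constraint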


Solution by method of Lagrange multipliers

Continuing with the same specific case of the optimization problem with n = 2 and m = 1, we define a quantity \lambda, called the Lagrange multiplier, as

\lambda = - \frac{\partial f / \partial x_2}{\partial g / \partial x_2}\Bigg|_{(x_1*, x_2*)}        (8)

Using this in (6) gives

\left( \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} \right)\Bigg|_{(x_1*, x_2*)} = 0        (9)

and (8) can be written as

\left( \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} \right)\Bigg|_{(x_1*, x_2*)} = 0        (10)


Solution by method of Lagrange multipliers (contd.)

Also, the constraint equation has to be satisfied at the extreme point:

g(x_1, x_2)\Big|_{(x_1*, x_2*)} = 0        (11)

Hence equations (9) to (11) represent the necessary conditions for the point [x_1*, x_2*] to be an extreme point.

Note that \lambda could be expressed in terms of \partial g / \partial x_1 as well, in which case \partial g / \partial x_1 has to be non-zero. Thus, these necessary conditions require that at least one of the partial derivatives of g(x_1, x_2) be non-zero at an extreme point.


Solution by method of Lagrange multipliers (contd.)

The conditions given by equations (9) to (11) can also be generated by constructing a function L, known as the Lagrangian function, as

L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda \, g(x_1, x_2)        (12)

Alternatively, treating L as a function of x_1, x_2 and \lambda, the necessary conditions for its extremum are given by

\frac{\partial L}{\partial x_1}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_1}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_1}(x_1, x_2) = 0
\frac{\partial L}{\partial x_2}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_2}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_2}(x_1, x_2) = 0        (13)
\frac{\partial L}{\partial \lambda}(x_1, x_2, \lambda) = g(x_1, x_2) = 0
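As a small worked illustration of (12) and (13) (a sketch only, assuming SymPy; the objective and constraint are placeholders, not taken from these slides), the Lagrangian can be formed symbolically and the three stationarity conditions solved directly:

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
f = x1**2 + x2**2                  # illustrative objective f(x1, x2)
g = x1 + x2 - 2                    # illustrative constraint g(x1, x2) = 0

L = f + lam*g                      # Lagrangian function, equation (12)

# Necessary conditions (13): dL/dx1 = dL/dx2 = dL/dlam = 0
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, [x1, x2, lam], dict=True))
# -> [{lam: -2, x1: 1, x2: 1}], the same candidate point found from condition (7) above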


Necessary conditions for a general problem

For a general problem with n variables and m equality constraints, the problem is defined as shown earlier:

Maximize (or minimize) f(X), subject to g_j(X) = 0,  j = 1, 2, ..., m

where

X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

In this case the Lagrange function, L, will have one Lagrange multiplier \lambda_j for each constraint g_j(X):

L(x_1, x_2, ..., x_n, \lambda_1, \lambda_2, ..., \lambda_m) = f(X) + \lambda_1 g_1(X) + \lambda_2 g_2(X) + ... + \lambda_m g_m(X)        (14)


Necessary conditions for a general problem (contd.)

L is now a function of the n + m unknowns x_1, x_2, ..., x_n, \lambda_1, \lambda_2, ..., \lambda_m, and the necessary conditions for the problem defined above are given by

\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i}(X) + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i}(X) = 0,   i = 1, 2, ..., n
\frac{\partial L}{\partial \lambda_j} = g_j(X) = 0,   j = 1, 2, ..., m        (15)

which represent n + m equations in terms of the n + m unknowns x_i and \lambda_j. The solution to this set of equations gives us

X^* = \begin{bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{bmatrix}   and   \lambda^* = \begin{bmatrix} \lambda_1^* \\ \lambda_2^* \\ \vdots \\ \lambda_m^* \end{bmatrix}        (16)
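To make the count of n + m equations and unknowns in (15) concrete, here is a small sketch (assuming SymPy; the three-variable, two-constraint problem is invented purely for illustration) that assembles the Lagrange function of (14) and solves the resulting 5-equation system:

import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 l1 l2', real=True)
f  = x1**2 + x2**2 + x3**2            # illustrative objective f(X), n = 3
g1 = x1 + x2 + x3 - 3                 # constraint g1(X) = 0
g2 = x1 - x2                          # constraint g2(X) = 0, so m = 2

L = f + l1*g1 + l2*g2                 # Lagrange function, equation (14)

# n + m = 5 stationarity equations (15) in the 5 unknowns
eqs = [sp.diff(L, v) for v in (x1, x2, x3, l1, l2)]
print(sp.solve(eqs, [x1, x2, x3, l1, l2], dict=True))
# -> [{x1: 1, x2: 1, x3: 1, l1: -2, l2: 0}]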


    Sufficient conditions for a general problem

A sufficient condition for f(X) to have a relative minimum at X* is that each root of the polynomial in z, defined by the following determinantal equation, be positive:

\begin{vmatrix}
L_{11} - z & L_{12} & \cdots & L_{1n} & g_{11} & g_{21} & \cdots & g_{m1} \\
L_{21} & L_{22} - z & \cdots & L_{2n} & g_{12} & g_{22} & \cdots & g_{m2} \\
\vdots & & & \vdots & \vdots & & & \vdots \\
L_{n1} & L_{n2} & \cdots & L_{nn} - z & g_{1n} & g_{2n} & \cdots & g_{mn} \\
g_{11} & g_{12} & \cdots & g_{1n} & 0 & 0 & \cdots & 0 \\
g_{21} & g_{22} & \cdots & g_{2n} & 0 & 0 & \cdots & 0 \\
\vdots & & & \vdots & \vdots & & & \vdots \\
g_{m1} & g_{m2} & \cdots & g_{mn} & 0 & 0 & \cdots & 0
\end{vmatrix} = 0        (17)


Sufficient conditions for a general problem (contd.)

where

L_{ij} = \frac{\partial^2 L}{\partial x_i \, \partial x_j}(X^*, \lambda^*),   for i = 1, 2, ..., n and j = 1, 2, ..., n
g_{pq} = \frac{\partial g_p}{\partial x_q}(X^*),   where p = 1, 2, ..., m and q = 1, 2, ..., n        (18)

Similarly, a sufficient condition for f(X) to have a relative maximum at X* is that each root of the polynomial in z, defined by equation (17), be negative.

If equation (17), on solving, yields roots some of which are positive and others negative, then the point X* is neither a maximum nor a minimum.
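As a numerical check of (17) and (18) (a sketch only, assuming SymPy; it reuses the illustrative problem f = x1^2 + x2^2, g = x1 + x2 - 2 from the earlier sketches, for which L11 = L22 = 2, L12 = L21 = 0 and g11 = g12 = 1), the polynomial in z can be formed and its roots inspected:

import sympy as sp

z = sp.symbols('z')

# Bordered determinant of equation (17) for n = 2, m = 1
M = sp.Matrix([[2 - z, 0,     1],
               [0,     2 - z, 1],
               [1,     1,     0]])

print(sp.solve(sp.det(M), z))   # -> [2]; the single root is positive,
                                # so the candidate point is a relative minimum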


    Example

Minimize f(X) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 - 7x_1 - 5x_2, subject to x_1 + x_2 = 5,

or g(X) = x_1 + x_2 - 5 = 0


The Lagrangian function (12) for this problem is

L(x_1, x_2, \lambda) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 - 7x_1 - 5x_2 + \lambda (x_1 + x_2 - 5)

and the necessary conditions (13) give

\frac{\partial L}{\partial x_1} = 6x_1 + 6x_2 - 7 + \lambda = 0
\frac{\partial L}{\partial x_2} = 6x_1 + 10x_2 - 5 + \lambda = 0
\frac{\partial L}{\partial \lambda} = x_1 + x_2 - 5 = 0

Eliminating \lambda from the first two equations gives 4x_2 + 2 = 0, i.e. x_2 = -1/2, and the constraint then gives x_1 = 11/2. Hence

X^* = \left[ \frac{11}{2}, \; -\frac{1}{2} \right];   \lambda^* = -23

For the sufficiency check, the determinantal equation (17) reduces here to

\begin{vmatrix} L_{11} - z & L_{12} & g_{11} \\ L_{21} & L_{22} - z & g_{21} \\ g_{11} & g_{12} & 0 \end{vmatrix} = 0
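The numbers above can be checked quickly (a verification sketch, assuming SymPy; it re-solves conditions (13) for this example and evaluates the 3x3 determinant of (17) with L11 = L12 = L21 = 6, L22 = 10, g11 = g12 = 1):

import sympy as sp

x1, x2, lam, z = sp.symbols('x1 x2 lam z', real=True)
f = 3*x1**2 + 6*x1*x2 + 5*x2**2 - 7*x1 - 5*x2
g = x1 + x2 - 5
L = f + lam*g

# Necessary conditions (13)
print(sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True))
# -> [{x1: 11/2, x2: -1/2, lam: -23}]

# Sufficiency check, equation (17)
M = sp.Matrix([[6 - z, 6,      1],
               [6,     10 - z, 1],
               [1,     1,      0]])
print(sp.solve(sp.det(M), z))   # -> [2]; positive root, so X* is a relative minimum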


    Thank you