
    Abdelkader BENHARI

    Optimization methods

Introduction and Basic Concepts of optimization problems; Optimization using calculus, Kuhn-Tucker Conditions; Linear Programming - Graphical method, Simplex method, Revised simplex method, Sensitivity analysis, Examples of transportation and assignment; Dynamic Programming - Introduction, Sequential optimization, computational procedure, curse of dimensionality, Applications in water resources and structural engineering; Other topics in Optimization - Piecewise linear approximation, Multi-objective optimization, Multi-level optimization.


    Contents

Introduction and Basic Concepts
Historical Development and Model Building
Optimization Problem and Model Formulation
Classification of Optimization Problems
Classical and Advanced Techniques for Optimization
Optimization using Calculus - Stationary Points
Stationary Points: Functions of Single and Two Variables
Convexity and Concavity of Functions of One and Two Variables
Optimization using Calculus - Unconstrained Optimization
Optimization of Functions of Multiple Variables: Unconstrained Optimization
Optimization using Calculus - Kuhn-Tucker Conditions
Linear Programming - Preliminaries
Linear Programming - Graphical Method
Linear Programming - Simplex Method I
Linear Programming - Simplex Method II
Revised Simplex Method, Duality and Sensitivity Analysis
Linear Programming - Other Algorithms for Solving Linear Programming Problems
Linear Programming Applications - Software
MATLAB Toolbox for Linear Programming
Linear Programming Applications - Transportation Problem
Transportation Problem
Linear Programming - Assignment Problem
Linear Programming Applications - Structural & Water Resources Problems
Dynamic Programming - Introduction
Dynamic Programming - Recursive Equations
Computational Procedure in Dynamic Programming
Dynamic Programming - Other Topics


Dynamic Programming Applications - Design of Continuous Beam
Dynamic Programming Applications - Optimum Geometric Layout of Truss
Dynamic Programming Applications - Water Allocation
Water Allocation as a Sequential Process - Recursive Equations
Water Allocation as a Sequential Process - Numerical Example
Dynamic Programming Applications - Capacity Expansion
Dynamic Programming Applications - Reservoir Operation
Integer Programming - Integer Linear Programming
Integer Programming - Mixed Integer Programming
Integer Programming - Examples
Advanced Topics in Optimization
Piecewise Linear Approximation of a Nonlinear Function
Advanced Topics in Optimization - Multi-objective Optimization
Advanced Topics in Optimization - Multilevel Optimization
Advanced Topics in Optimization - Direct and Indirect Search Methods
Advanced Topics in Optimization - Evolutionary Algorithms for Optimization
Advanced Topics in Optimization - Applications in Civil Engineering
References


    Optimization Methods: Introduction and Basic Concepts

    Historical Development and Model Building

    Introduction

In this lecture, the historical development of optimization methods is reviewed briefly. Apart from the major developments, some recently developed novel approaches, such as goal programming for multi-objective optimization, simulated annealing, genetic algorithms, and neural network methods, are briefly mentioned, tracing their origins. Engineering applications of optimization with different modeling approaches are then surveyed, from which one can get a broad picture of the multitude of applications of optimization techniques.

    Historical Development

The existence of optimization methods can be traced to the days of Newton, Lagrange, and Cauchy. The development of differential calculus methods for optimization was possible because of the contributions of Newton and Leibniz to calculus. The foundations of the calculus of variations, which deals with the minimization of functionals, were laid by Bernoulli, Euler, Lagrange, and Weierstrass. The method of optimization for constrained problems, which involves the addition of unknown multipliers, became known by the name of its inventor, Lagrange. Cauchy made the first application of the steepest descent method to solve

    unconstrained optimization problems. By the middle of the twentieth century, the high-speed

    digital computers made implementation of the complex optimization procedures possible and

    stimulated further research on newer methods. Spectacular advances followed, producing a

    massive literature on optimization techniques. This advancement also resulted in the

    emergence of several well defined new areas in optimization theory.

    Some of the major developments in the area of numerical methods of unconstrained

    optimization are outlined here with a few milestones.

    • Development of the simplex method by Dantzig in 1947 for linear programming

     problems

    • The enunciation of the principle of optimality in 1957 by Bellman for dynamic

     programming problems,


    • Work by Kuhn and Tucker in 1951 on the necessary and sufficient conditions for the

    optimal solution of programming problems laid the foundation for later research in

    non-linear programming.

• The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s have been very significant.

    • Work of Carroll and Fiacco and McCormick facilitated many difficult problems to be

    solved by using the well-known techniques of unconstrained optimization.

    • Geometric programming was developed in the 1960s by Duffin, Zener, and Peterson.

    • Gomory did pioneering work in integer programming, one of the most exciting and 

rapidly developing areas of optimization. The reason for this is that most real-world

    applications fall under this category of problems.

    • Dantzig and Charnes and Cooper developed stochastic programming techniques and 

    solved problems by assuming design parameters to be independent and normally

    distributed.

    The necessity to optimize more than one objective or goal while satisfying the physical

    limitations led to the development of multi-objective programming methods. Goal

     programming is a well-known technique for solving specific types of multi-objective

optimization problems. Goal programming was originally proposed for linear problems

     by Charnes and Cooper in 1961. The foundation of game theory was laid by von Neumann in

    1928 and since then the technique has been applied to solve several mathematical, economic

    and military problems. Only during the last few years has game theory been applied to solve

    engineering problems.

    Simulated annealing, genetic algorithms, and neural network methods represent a new class

    of mathematical programming techniques that have come into prominence during the last

    decade. Simulated annealing is analogous to the physical process of annealing of metals and

    glass. The genetic algorithms are search techniques based on the mechanics of natural

    selection and natural genetics. Neural network methods are based on solving the problem

    using the computing power of a network of interconnected ‘neuron’ processors.


    Engineering applications of optimization

    To indicate the widespread scope of the subject, some typical applications in different

    engineering disciplines are given below.

    •  Design of civil engineering structures such as frames, foundations, bridges, towers,

    chimneys and dams for minimum cost.

•  Design of minimum weight structures for earthquake, wind and other types of

    random loading.

    •  Optimal plastic design of frame structures (e.g., to determine the ultimate moment

    capacity for minimum weight of the frame).

    •  Design of water resources systems for obtaining maximum benefit.

    •  Design of optimum pipeline networks for process industry.

•  Design of aircraft and aerospace structures for minimum weight.

    •  Finding the optimal trajectories of space vehicles.

    •  Optimum design of linkages, cams, gears, machine tools, and other mechanical

    components.

    •  Selection of machining conditions in metal-cutting processes for minimizing the

     product cost.

    •  Design of material handling equipment such as conveyors, trucks and cranes for

    minimizing cost.

    •  Design of pumps, turbines and heat transfer equipment for maximum efficiency.

    •  Optimum design of electrical machinery such as motors, generators and transformers.

    •  Optimum design of electrical networks.

    •  Optimum design of control systems.

•  Optimum design of chemical processing equipment and plants.

    •  Selection of a site for an industry.

    •  Planning of maintenance and replacement of equipment to reduce operating costs.

    •  Inventory control.

    •  Allocation of resources or services among several activities to maximize the benefit.

•  Controlling the waiting and idle times in production lines to reduce the cost of production.

•  Planning the best strategy to obtain maximum profit in the presence of a competitor.


•  Designing the shortest route to be taken by a salesperson to visit various cities in a

    single tour.

    •  Optimal production planning, controlling and scheduling.

•  Analysis of statistical data and building empirical models to obtain the most accurate representation of the statistical phenomenon.

This list is by no means complete.

    Art of Modeling: Model Building

    Development of an optimization model can be divided into five major phases.

    •  Data collection

    •  Problem definition and formulation

    •  Model development

    •  Model validation and evaluation of performance

    •  Model application and interpretation

    Data collection may be time consuming but is the fundamental basis of the model-building

     process. The availability and accuracy of data can have considerable effect on the accuracy of

    the model and on the ability to evaluate the model.

    The problem definition and formulation includes the steps: identification of the decision

    variables; formulation of the model objective(s) and the formulation of the model constraints.

    In performing these steps the following are to be considered.

    •  Identify the important elements that the problem consists of.

    •  Determine the number of independent variables, the number of equations required to

    describe the system, and the number of unknown parameters.

    •  Evaluate the structure and complexity of the model

    •  Select the degree of accuracy required of the model

    Model development  includes the mathematical description, parameter estimation, input

    development, and software development. The model development phase is an iterative

     process that may require returning to the model definition and formulation phase.

    The model validation and evaluation phase is checking the performance of the model as a

    whole. Model validation  consists of validation of the assumptions and parameters of the


    model. The performance of the model is to be evaluated using standard performance

measures such as the root mean squared error and the R² value. A sensitivity analysis should be

     performed to test the model inputs and parameters. This phase also is an iterative process and

    may require returning to the model definition and formulation phase. One important aspect of

    this process is that in most cases data used in the formulation process should be different

    from that used in validation. Another point to keep in mind is that no single validation

     process is appropriate for all models.

    Model application and implementation include the use of the model in the particular area

    of the solution and the translation of the results into operating instructions issued in

    understandable form to the individuals who will administer the recommended system.

    Different modeling techniques are developed to meet the requirements of different types of

    optimization problems. Major categories of modeling approaches are: classical optimization

    techniques, linear programming, nonlinear programming, geometric programming, dynamic

     programming, integer programming, stochastic programming, evolutionary algorithms, etc.

    These modeling approaches will be discussed in subsequent modules of this course.


    Optimization Methods: Introduction and Basic concepts

    Optimization Problem and Model Formulation

    Introduction

    In the previous lecture we studied the evolution of optimization methods and their

    engineering applications. A brief introduction was also given to the art of modeling. In this

    lecture we will study the Optimization problem, its various components and its formulation as

    a mathematical programming problem.

    Basic components of an optimization problem:

    An objective function expresses the main aim of the model which is either to be minimized

    or maximized. For example, in a manufacturing process, the aim may be to maximize the

     profit or minimize the cost. In comparing the data prescribed by a user-defined model with the

    observed data, the aim is minimizing the total deviation of the predictions based on the model

    from the observed data. In designing a bridge pier, the goal is to maximize the strength and

    minimize size.

    A set of unknowns or variables control the value of the objective function. In the

    manufacturing problem, the variables may include the amounts of different resources used or

the time spent on each activity. In the data-fitting problem, the unknowns are the parameters

    of the model. In the pier design problem, the variables are the shape and dimensions of the

     pier.

A set of constraints allows the unknowns to take on certain values but excludes others. In the manufacturing problem, one cannot spend a negative amount of time on

    any activity, so one constraint is that the "time" variables are to be non-negative. In the pier

    design problem, one would probably want to limit the breadth of the base and to constrain its

    size.

    The optimization problem is then to find values of the variables that minimize or maximize

    the objective function while satisfying the constraints.

    Objective Function

    As already stated, the objective function is the mathematical function one wants to maximize

    or minimize, subject to certain constraints. Many optimization problems have a single


objective function. (When they don't, they can often be reformulated so that they do.) The two

    exceptions are:

    •   No objective function. In some cases (for example, design of integrated circuit

    layouts), the goal is to find a set of variables that satisfies the constraints of the model.

    The user does not particularly want to optimize anything and so there is no reason to

define an objective function. This type of problem is usually called a feasibility

     problem.

    •   Multiple objective functions. In some cases, the user may like to optimize a number of

    different objectives concurrently. For instance, in the optimal design of panel of a

    door or window, it would be good to minimize weight and maximize strength

    simultaneously. Usually, the different objectives are not compatible; the variables that

    optimize one objective may be far from optimal for the others. In practice, problems

    with multiple objectives are reformulated as single-objective problems by either

    forming a weighted combination of the different objectives or by treating some of the

    objectives as constraints.

    Statement of an optimization problem

    An optimization or a mathematical programming problem can be stated as follows:

To find X = (x_1, x_2, \ldots, x_n)^T which minimizes f(X)    (1.1)

subject to the constraints

    g_i(X) \le 0,   i = 1, 2, \ldots, m
    l_j(X) = 0,     j = 1, 2, \ldots, p

where X is an n-dimensional vector called the design vector, f(X) is called the objective function, and g_i(X) and l_j(X) are known as inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way. This type of problem is called a constrained optimization problem.
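As a concrete illustration of the form (1.1), the following minimal sketch solves a small constrained problem numerically with SciPy's minimize routine. The objective, constraints and starting point are invented for illustration only; the notes do not prescribe any particular solver.

    # Minimal numerical sketch of the standard form (1.1); problem data are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # objective f(X): a simple quadratic in two design variables
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    def g1(x):
        # inequality constraint g1(X) <= 0:  x1 + x2 - 2 <= 0
        return x[0] + x[1] - 2.0

    def l1(x):
        # equality constraint l1(X) = 0:  x1 - x2 = 0
        return x[0] - x[1]

    constraints = [
        # SciPy expects 'ineq' constraints in the form fun(x) >= 0,
        # so g1(X) <= 0 is passed as -g1(X) >= 0.
        {"type": "ineq", "fun": lambda x: -g1(x)},
        {"type": "eq", "fun": l1},
    ]

    x0 = np.array([0.0, 0.0])                       # initial guess for the design vector X
    result = minimize(f, x0, method="SLSQP", constraints=constraints)
    print(result.x, result.fun)                     # optimal design vector and objective value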


If the locus of all points satisfying f(X) = c, a constant, is considered, it forms a family of surfaces in the design space called the objective function surfaces. When these are drawn together with the constraint surfaces, as shown in Fig. 1, the optimum point (here a maximum) can be identified. This is possible graphically only when the number of design variables is two. With three or more design variables, this visualization is no longer possible because of the complexity of the objective function surface, and the problem has to be solved purely as a mathematical problem.

Fig. 1: Objective function surfaces f = C1, C2, ..., Cn (with C1 > C2 > C3 > C4 > ... > Cn) drawn with the constraint surfaces, and the optimum point.

    Optimization problems can be defined without any constraints as well.

To find X = (x_1, x_2, \ldots, x_n)^T which minimizes f(X)    (1.2)

    Such problems are called unconstrained optimization problems.  The field of unconstrained

    optimization is quite a large and prominent one, for which a lot of algorithms and software

    are available.


Variables

These are essential. If there are no variables, we cannot define the objective function and the problem constraints. In many practical problems, one cannot choose the design variables arbitrarily; they have to satisfy certain specified functional and other requirements.

    Constraints

    Constraints are not essential. It's been argued that almost all problems really do  have

    constraints. For example, any variable denoting the "number of objects" in a system can only

     be useful if it is less than the number of elementary particles in the known universe! In

     practice though, answers that make good sense in terms of the underlying physical or

    economic criteria can often be obtained without putting constraints on the variables.

     Design constraints are restrictions that must be satisfied to produce an acceptable design.

    Constraints can be broadly classified as:

1)  Behavioral or Functional constraints: These represent limitations on the behavior or performance of the system.

    2)  Geometric or Side constraints: These represent physical limitations on design

    variables such as availability, fabricability, and transportability.

    For example, for the retaining wall design shown in the Fig 2, the base width W cannot be

    taken smaller than a certain value due to stability requirements. The depth D below the

ground level depends on the soil pressure coefficients K_a and K_p. Since these constraints

    depend on the performance of the retaining wall they are called behavioral constraints. The

number of anchors provided along a cross section, N_i, cannot be any real number but has to be a whole number. Similarly, the thickness of reinforcement used is controlled by supplies from the manufacturer. Hence these are side constraints.


Fig. 2: Retaining wall with base width W, depth D below ground level, and N_i anchors along a cross section.

    Constraint Surfaces

Consider the optimization problem presented in eq. (1.1) with only the inequality constraints g_i(X) \le 0. The set of values of X that satisfy the equation g_i(X) = 0 forms a boundary surface in the design space called a constraint surface. This will be an (n-1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one with g_i(X) < 0 (the feasible region) and the other in which g_i(X) > 0 (the infeasible region). The points lying on the hypersurface satisfy g_i(X) = 0. The collection of all the constraint surfaces g_i(X) = 0, i = 1, 2, \ldots, m, which separates the acceptable region, is called the composite constraint surface.

    Fig 3 shows a hypothetical two-dimensional design space where the feasible region is

    denoted by hatched lines. The two-dimensional design space is bounded by straight lines as

    shown in the figure. This is the case when the constraints are linear. However, constraints

    may be nonlinear as well and the design space will be bounded by curves in that case. A

    design point that lies on more than one constraint surface is called a bound point, and the

    associated constraint is called an active constraint. Free points are those that do not lie on any

    constraint surface. The design points that lie in the acceptable or unacceptable regions can be

classified as follows:

    1.  Free and acceptable point

2.  Free and unacceptable point


    3.  Bound and acceptable point

    4.  Bound and unacceptable point.

    Examples of each case are shown in Fig. 3.

Fig. 3: A hypothetical two-dimensional design space. The feasible region is bounded by the behavior constraints g1 \le 0 and g2 \le 0 and the side constraint g3 \ge 0; examples of free acceptable, free unacceptable, bound acceptable, and bound unacceptable points are marked.

    Formulation of design problems as mathematical programming problems

    In mathematics, the term optimization, or mathematical programming, refers to the study

    of problems in which one seeks to minimize or maximize a real function by systematically

    choosing the values of real or integer variables from within an allowed set. This problem can

     be represented in the following way

Given: a function f : A → R from some set A to the real numbers

    Sought: an element  x0 in A such that  f ( x0) ≤  f ( x) for all x in A  ("minimization") or such that

     f ( x0) ≥  f ( x) for all x in A ("maximization").

    Such a formulation is called an optimization problem or a mathematical programming

    problem (a term not directly related to computer programming, but still in use for example,


in linear programming (see module 3)). Many real-world and theoretical problems may be

    modeled in this general framework.

    Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints,

    equalities or inequalities that the members of A have to satisfy. The elements of A are called

    candidate solutions or feasible solutions. The function f  is called an objective function, or cost

     function. A feasible solution that minimizes (or maximizes, if that is the goal) the objective

    function is called an optimal solution. The domain A of f  is called the search space.

    Generally, when the feasible region or the objective function of the problem does not present

    convexity (refer module 2), there may be several local minima and maxima, where a local

minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

    \|x - x^*\| \le \delta,

the inequality

    f(x^*) \le f(x)

holds; that is to say, on some region around x* all the function values are greater than or equal to the value at that point. Local maxima are defined similarly.

A large number of algorithms proposed for solving non-convex problems, including the majority of commercially available solvers, are not capable of making a distinction between

    local optimal solutions and rigorous optimal solutions, and will treat the former as the actual

    solutions to the original problem. The branch of applied mathematics and numerical analysis

    that is concerned with the development of deterministic algorithms that are capable of

    guaranteeing convergence in finite time to the actual optimal solution of a non-convex

     problem is called global optimization.

    Problem formulation

    Problem formulation is normally the most difficult part of the process. It is the selection of

    design variables, constraints, objective function(s), and models of the discipline/design.

    Selection of design variables

    A design variable, that takes a numeric or binary value, is controllable from the point of view

    of the designer. For instance, the thickness of a structural member can be considered a design

    variable. Design variables can be continuous (such as the length of a cantilever beam),


    discrete (such as the number of reinforcement bars used in a beam), or Boolean. Design

     problems with continuous variables are normally solved more easily.

    Design variables are often bounded, that is, they have maximum and minimum values.

    Depending on the adopted method, these bounds can be treated as constraints or separately.

    Selection of constraints

    A constraint is a condition that must be satisfied to render the design to be feasible. An

    example of a constraint in beam design is that the resistance offered by the beam at points of

    loading must be equal to or greater than the weight of structural member and the load

    supported. In addition to physical laws, constraints can reflect resource limitations, user

    requirements, or bounds on the validity of the analysis models. Constraints can be used

    explicitly by the solution algorithm or can be incorporated into the objective, by using

    Lagrange multipliers.

    Objectives

    An objective is a numerical value that is to be maximized or minimized. For example, a

    designer may wish to maximize profit or minimize weight. Many solution methods work only

    with single objectives. When using these methods, the designer normally weights the various

objectives and sums them to form a single objective. Other methods allow multi-objective optimization (module 8), such as the calculation of a Pareto front.

    Models

    The designer has to also choose models to relate the constraints and the objectives to the

    design variables. These models are dependent on the discipline involved. They may be

    empirical models, such as a regression analysis of aircraft prices, theoretical models, such as

    from computational fluid dynamics, or reduced-order models of either of these. In choosing

    the models the designer must trade-off fidelity with the time required for analysis.

    The multidisciplinary nature of most design problems complicates model choice and

    implementation. Often several iterations are necessary between the disciplines’ analyses in

    order to find the values of the objectives and constraints. As an example, the aerodynamic

    loads on a bridge affect the structural deformation of the supporting structure. The structural

    deformation in turn changes the shape of the bridge and hence the aerodynamic loads. Thus,

    it can be considered as a cyclic mechanism. Therefore, in analyzing a bridge, the


    aerodynamic and structural analyses must be run a number of times in turn until the loads and

    deformation converge.

    Representation in standard form

    Once the design variables, constraints, objectives, and the relationships between them have

     been chosen, the problem can be expressed as shown in equation 1.1

    Maximization problems can be converted to minimization problems by multiplying the

    objective by -1. Constraints can be reversed in a similar manner. Equality constraints can be

    replaced by two inequality constraints.
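Written out explicitly, the conversions described above are (a compact restatement of the same rules, not an addition to the formulation):

    % Conversions to the standard minimization form of equation (1.1).
    \max_X f(X) \;\Longleftrightarrow\; \min_X \, [-f(X)]
    \qquad
    g_i(X) \ge 0 \;\Longleftrightarrow\; -g_i(X) \le 0
    \qquad
    l_j(X) = 0 \;\Longleftrightarrow\;
    \begin{cases} \;\; l_j(X) \le 0 \\ -l_j(X) \le 0 \end{cases}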

    Problem solution

    The problem is normally solved choosing the appropriate techniques from those available in

    the field of optimization. These include gradient-based algorithms, population-based

    algorithms, or others. Very simple problems can sometimes be expressed linearly; in that case

    the techniques of linear programming are applicable.

    Gradient-based methods

    •   Newton's method

    •  Steepest descent

    •  Conjugate gradient

    •  Sequential quadratic programming
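As an illustration of the steepest descent entry above, the sketch below implements a bare-bones descent iteration with a fixed step size; the test function and step length are assumptions made only for this example.

    # Minimal steepest descent sketch with a fixed step size (illustrative only).
    import numpy as np

    def steepest_descent(grad, x0, step=0.1, tol=1e-6, max_iter=1000):
        """Iterate x_{k+1} = x_k - step * grad(x_k) until the gradient is small."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            x = x - step * g
        return x

    # Hypothetical test problem: minimize f(x) = (x1 - 3)^2 + 2*(x2 + 1)^2
    grad_f = lambda x: np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])
    print(steepest_descent(grad_f, x0=[0.0, 0.0]))   # approaches [3, -1]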

    Population-based methods

    •  Genetic algorithms

    •  Particle swarm optimization

    Other methods

    •  Random search

    •  Grid search

    •  Simulated annealing

Most of these techniques require a large number of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take a significant amount of time for a single evaluation. The solution can therefore be extremely time-consuming.


    Many of the optimization techniques are adaptable to parallel computing. Much of the current

    research is focused on methods of decreasing the computation time.

    The following steps summarize the general procedure used to formulate and solve

    optimization problems. Some problems may not require that the engineer follow the steps in

    the exact order, but each of the steps should be considered in the process.

    1)  Analyze the process itself to identify the process variables and specific characteristics

    of interest, i.e., make a list of all the variables.

    2)  Determine the criterion for optimization and specify the objective function in terms of

    the above variables together with coefficients.

    3)  Develop via mathematical expressions a valid process model that relates the input-

    output variables of the process and associated coefficients. Include both equality and

    inequality constraints. Use well known physical principles such as mass balances,

    energy balance, empirical relations, implicit concepts and external restrictions.

    Identify the independent and dependent variables to get the number of degrees of

    freedom.

4)  If the problem formulation is too large in scope:
    •  break it up into manageable parts, or
    •  simplify the objective function and the model.

5)  Apply a suitable optimization technique to the mathematical statement of the problem.

6)  Examine the sensitivity of the result to changes in the values of the parameters in the problem and in the assumptions.


    Optimization Methods: Introduction and Basic Concepts

    Classification of Optimization Problems

    Introduction

    In the previous lecture we studied the basics of an optimization problem and its formulation

    as a mathematical programming problem. In this lecture we look at the various criteria for

    classification of optimization problems.

    Optimization problems can be classified based on the type of constraints, nature of design

    variables, physical structure of the problem, nature of the equations involved, deterministic

    nature of the variables, permissible value of the design variables, separability of the functions

    and number of objective functions. These classifications are briefly discussed below.

    Classification based on existence of constraints.

    Under this category optimizations problems can be classified into two groups as follows:

    Constrained optimization problems: which are subject to one or more constraints.

    Unconstrained optimization problems: in which no constraints exist.

    Classification based on the nature of the design variables.

    There are two broad categories in this classification.

    (i) In the first category the objective is to find a set of design parameters that makes a

     prescribed function of these parameters minimum or maximum subject to certain constraints.

For example, the problem of finding the minimum weight design of a strip footing with two loads, shown in Fig. 1(a), subject to a limitation on the maximum settlement of the structure, can be stated as follows.

Find X = (b, d)^T which minimizes f(X) = h(b, d)

subject to the constraints

    \delta_s(X) \le \delta_{max};   b \ge 0;   d \ge 0

where \delta_s is the settlement of the footing. Such problems are called parameter or static optimization problems.


    It may be noted that, for this particular example, the length of the footing (l), the loads P1 and

    P2  and the distance between the loads are assumed to be constant and the required

    optimization is achieved by varying b and d.

    (ii) In the second category of problems, the objective is to find a set of design parameters,

    which are all continuous functions of some other parameter that minimizes an objective

    function subject to a set of constraints. If the cross sectional dimensions of the rectangular

    footings are allowed to vary along its length as shown in Fig 1 (b), the optimization problem

    can be stated as :

Find X(t) = (b(t), d(t))^T which minimizes f(X) = g(b(t), d(t))

subject to the constraints

    \delta_s(X(t)) \le \delta_{max},   0 \le t \le l
    b(t) \ge 0,                        0 \le t \le l
    d(t) \ge 0,                        0 \le t \le l

The length of the footing (l), the loads P1 and P2, and the distance between the loads are assumed to be constant, and the required optimization is achieved by varying b and d along the length l. Here the design variables are functions of the length parameter t. This type of problem, where each design variable is a function of one or more parameters, is known as a trajectory or dynamic optimization problem.

Fig. 1: Strip footing of length l carrying loads P1 and P2: (a) constant cross section with breadth b and depth d; (b) varying cross section b(t) and d(t) along the length.


    Classification based on the physical structure of the problem

    Based on the physical structure, optimization problems are classified as optimal control and

    non-optimal control problems.

    (i) Optimal control problems

    An optimal control (OC) problem is a mathematical programming problem involving a

    number of stages, where each stage evolves from the preceding stage in a prescribed manner.

It is defined by two types of variables: the control (or design) variables and the state variables. The control variables define the system and control how one stage evolves into the next. The state

    variables describe the behavior or status of the system at any stage. The problem is to find a

    set of control variables such that the total objective function (also known as the performance

    index, PI) over all stages is minimized, subject to a set of constraints on the control and state

    variables. An OC problem can be stated as follows:

Find X which minimizes

    f(X) = \sum_{i=1}^{l} f_i(x_i, y_i)

subject to the constraints

    q_i(x_i, y_i) + y_i = y_{i+1},   i = 1, 2, \ldots, l
    g_j(x_j) \le 0,                  j = 1, 2, \ldots, l
    h_k(y_k) \le 0,                  k = 1, 2, \ldots, l

where x_i is the ith control variable, y_i is the ith state variable, and f_i is the contribution of the ith stage to the total objective function; g_j, h_k, and q_i are functions of x_j, y_j; x_k, y_k; and x_i, y_i, respectively, and l is the total number of stages. The control and state variables x_i and y_i

    can be vectors in some cases.

    (ii) Problems which are not optimal control problems  are called non-optimal control

     problems.

    Classification based on the nature of the equations involved

    Based on the nature of equations for the objective function and the constraints, optimization

     problems can be classified as linear, nonlinear, geometric and quadratic programming

     problems. The classification is very useful from a computational point of view since many


     predefined special methods are available for effective solution of a particular type of

     problem.

    (i) Linear programming problem

    If the objective function and all the constraints are ‘linear’ functions of the design variables,

    the optimization problem is called a linear programming problem (LPP). A linear

     programming problem is often stated in the standard form :

Find X = (x_1, x_2, \ldots, x_n)^T which maximizes

    f(X) = \sum_{i=1}^{n} c_i x_i

subject to the constraints

    \sum_{i=1}^{n} a_{ij} x_i = b_j,   j = 1, 2, \ldots, m
    x_i \ge 0,                          i = 1, 2, \ldots, n

where c_i, a_{ij}, and b_j are constants.
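The small sketch below shows how a problem of this kind might be passed to SciPy's linprog; the coefficients are hypothetical, inequality constraints are used (they become equalities once slack variables are added), and the maximization is handled by negating the cost vector because linprog minimizes.

    # Illustrative LP: maximize 3*x1 + 5*x2
    # subject to x1 + 2*x2 <= 10, 3*x1 + x2 <= 15, x1 >= 0, x2 >= 0.
    from scipy.optimize import linprog

    c = [-3.0, -5.0]                     # negated objective coefficients c_i (linprog minimizes)
    A_ub = [[1.0, 2.0], [3.0, 1.0]]      # constraint coefficients a_ij
    b_ub = [10.0, 15.0]                  # right-hand sides b_j
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)               # optimal design vector and maximum objective value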

    (ii) Nonlinear programming problem

    If any of the functions among the objectives and constraint functions is nonlinear, the

     problem is called a nonlinear programming (NLP) problem. This is the most general form of

    a programming problem and all other problems can be considered as special cases of the NLP

     problem.

    (iii) Geometric programming problem

A geometric programming (GMP) problem is one in which the objective function and constraints are expressed as polynomials in X. A function h(X) is called a polynomial (with m terms) if h can be expressed as

    h(X) = c_1 x_1^{a_{11}} x_2^{a_{21}} \cdots x_n^{a_{n1}}
         + c_2 x_1^{a_{12}} x_2^{a_{22}} \cdots x_n^{a_{n2}}
         + \cdots
         + c_m x_1^{a_{1m}} x_2^{a_{2m}} \cdots x_n^{a_{nm}}

where c_j (j = 1, \ldots, m) and a_{ij} (i = 1, \ldots, n and j = 1, \ldots, m) are constants with c_j \ge 0 and x_i \ge 0.

Thus GMP problems can be posed as follows:

Find X which minimizes

    f(X) = \sum_{j=1}^{N_0} c_j \left( \prod_{i=1}^{n} x_i^{a_{ij}} \right),   c_j > 0, x_i > 0

subject to

    g_k(X) = \sum_{j=1}^{N_k} a_{jk} \left( \prod_{i=1}^{n} x_i^{q_{ijk}} \right) > 0,   a_{jk} > 0, x_i > 0, k = 1, 2, \ldots, m

where N_0 and N_k denote the number of terms in the objective function and in the kth constraint function, respectively.

    (iv) Quadratic programming problem

    A quadratic programming problem is the best behaved nonlinear programming problem with

    a quadratic objective function and linear constraints and is concave (for maximization

     problems). It can be solved by suitably modifying the linear programming techniques. It is

    usually formulated as follows:

    F(X) = c + \sum_{i=1}^{n} q_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} x_i x_j

subject to

    \sum_{i=1}^{n} a_{ij} x_i = b_j,   j = 1, 2, \ldots, m
    x_i \ge 0,                          i = 1, 2, \ldots, n

where c, q_i, Q_{ij}, a_{ij}, and b_j are constants.
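A small worked instance (constructed here for illustration, not taken from the notes) shows the structure: a concave quadratic objective with one linear equality constraint can be reduced to a single-variable problem by eliminating a variable.

    % Hypothetical quadratic programming example.
    \text{Maximize } F(X) = 4x_1 + 6x_2 - x_1^2 - x_2^2
    \quad \text{subject to } x_1 + x_2 = 2,\; x_1 \ge 0,\; x_2 \ge 0.

    % Substituting x_2 = 2 - x_1 gives F = -2x_1^2 + 2x_1 + 8, and
    % dF/dx_1 = -4x_1 + 2 = 0 yields x_1^* = 0.5, x_2^* = 1.5, F^* = 8.5.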

    Classification based on the permissible values of the decision variables

    Under this classification, objective functions can be classified as integer and real-valued

     programming problems.


    (i) Integer programming problem

    If some or all of the design variables of an optimization problem are restricted to take only

    integer (or discrete) values, the problem is called an integer programming problem. For

example, the optimization may be to find the number of articles needed for an operation with least effort. Thus, with minimization of the effort required for the operation being the objective, the decision variables, i.e. the number of articles used, can take only integer values. Other restrictions on the minimum and maximum number of usable resources may be imposed.

    (ii) Real-valued programming problem

    A real-valued problem is that in which it is sought to minimize or maximize a real function

     by systematically choosing the values of real variables from within an allowed set. When the

    allowed set contains only real values, it is called a real-valued programming problem.

    Classification based on deterministic nature of the variables

    Under this classification, optimization problems can be classified as deterministic or

    stochastic programming problems.

    (i) Deterministic programming problem

In a deterministic system, the same input always produces the same output. In this type of problem all the design variables are deterministic.

    (ii) Stochastic programming problem

    In this type of an optimization problem, some or all the design variables are expressed

probabilistically (non-deterministic or stochastic). For example, estimating the life span of a structure from probabilistic inputs such as the concrete strength and the load capacity is a stochastic programming problem, as one can only estimate the life span of the structure stochastically.

    Classification based on separability of the functions

    Based on this classification, optimization problems can be classified as separable and non-

    separable programming problems based on the separability of the objective and constraint

    functions.

    (i) Separable programming problems


In this type of a problem the objective function and the constraints are separable. A function is said to be separable if it can be expressed as the sum of n single-variable functions f_1(x_1), f_2(x_2), \ldots, f_n(x_n), i.e.

    f(X) = \sum_{i=1}^{n} f_i(x_i)

and a separable programming problem can be expressed in standard form as:

Find X which minimizes

    f(X) = \sum_{i=1}^{n} f_i(x_i)

subject to

    g_j(X) = \sum_{i=1}^{n} g_{ij}(x_i) \le b_j,   j = 1, 2, \ldots, m

where b_j is a constant.

    Classification based on the number of objective functions

    Under this classification, objective functions can be classified as single-objective and multi-

    objective programming problems.

(i) Single-objective programming problem, in which there is only a single objective function.

    (ii) Multi-objective programming problem

    A multiobjective programming problem can be stated as follows:

Find X which minimizes f_1(X), f_2(X), \ldots, f_k(X)

subject to

    g_j(X) \le 0,   j = 1, 2, \ldots, m

where f_1, f_2, \ldots, f_k denote the objective functions to be minimized simultaneously.

    For example in some design problems one might have to minimize the cost and weight of the

    structural member for economy and, at the same time, maximize the load carrying capacity

    under the given constraints.
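One common reformulation mentioned earlier, the weighted-sum approach, combines the objectives as shown below; the weights w_i are chosen by the analyst and are not specified in the notes.

    % Weighted-sum scalarization of a multi-objective problem.
    \min_X \; \sum_{i=1}^{k} w_i \, f_i(X)
    \quad \text{subject to } g_j(X) \le 0, \; j = 1, 2, \ldots, m,
    \qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1.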


    Optimization Methods: Introduction and Basic Concepts

    Classical and Advanced Techniques for Optimization

    In the previous lecture having understood the various classifications of optimization

     problems, let us move on to understand the classical and advanced optimization techniques.

    Classical Optimization Techniques

    The classical optimization techniques are useful in finding the optimum solution or

    unconstrained maxima or minima of continuous and differentiable functions. These are

    analytical methods and make use of differential calculus in locating the optimum solution.

The classical methods have limited scope in practical applications, as many practical problems involve objective functions that are not continuous and/or differentiable. Yet, the study of these classical techniques of optimization forms a basis for developing most of the numerical techniques that have evolved into advanced techniques more suitable to today's practical

     problems. These methods assume that the function is differentiable twice with respect to the

    design variables and that the derivatives are continuous. Three main types of problems can be

    handled by the classical optimization techniques, viz., single variable functions, multivariable

    functions with no constraints and multivariable functions with both equality and inequality

    constraints. For problems with equality constraints the Lagrange multiplier method can be

    used. If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to

    identify the optimum solution. These methods lead to a set of nonlinear simultaneous

    equations that may be difficult to solve. These classical methods of optimization are further

    discussed in Module 2.

    The other methods of optimization include

    •  Linear programming: studies the case in which the objective function  f  is linear and

    the set A is specified using only linear equalities and inequalities. (A is the design

    variable space)

    •  Integer programming: studies linear programs in which some or all variables are

    constrained to take on integer values.

    •  Quadratic programming: allows the objective function to have quadratic terms,

    while the set A must be specified with linear equalities and inequalities.


    •  Nonlinear programming: studies the general case in which the objective function or

    the constraints or both contain nonlinear parts.

    •  Stochastic programming: studies the case in which some of the constraints depend

on random variables.

•  Dynamic programming: studies the case in which the optimization strategy is based

    on splitting the problem into smaller sub-problems.

    •  Combinatorial optimization: is concerned with problems where the set of feasible

    solutions is discrete or can be reduced to a discrete one.

    •  Infinite-dimensional optimization: studies the case when the set of feasible solutions

    is a subset of an infinite-dimensional space, such as a space of functions.

•  Constraint satisfaction: studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).

    Most of these techniques will be discussed in subsequent modules.

    Advanced Optimization Techniques

    •  Hill climbing

    Hill climbing  is a graph search algorithm where the current path is extended with a

    successor node which is closer to the solution than the end of the current path.

    In simple hill climbing, the first closer node is chosen whereas in steepest ascent hill

    climbing all successors are compared and the closest to the solution is chosen. Both

    forms fail if there is no closer node. This may happen if there are local maxima in the

    search space which are not solutions. Steepest ascent hill climbing is similar to best

    first search but the latter tries all possible extensions of the current path in order,

    whereas steepest ascent only tries one.

Hill climbing is used widely in artificial intelligence fields for reaching a goal state from a starting node. The choice of the next node and of the starting node can be varied to give a number of related algorithms.
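A minimal sketch of simple hill climbing over a discrete neighborhood is given below; the score function and neighborhood generator are hypothetical placeholders chosen only to make the idea concrete.

    # Minimal simple hill climbing sketch (illustrative; maximizes a score function).
    def hill_climb(score, neighbors, start, max_steps=1000):
        """Move to the first neighbor that improves the score; stop when none does."""
        current = start
        for _ in range(max_steps):
            improved = False
            for cand in neighbors(current):
                if score(cand) > score(current):
                    current, improved = cand, True
                    break                      # simple hill climbing: take the first closer node
            if not improved:                   # no better neighbor: a (possibly local) maximum
                break
        return current

    # Hypothetical example: maximize -(x - 7)^2 over the integers by unit moves.
    score = lambda x: -(x - 7) ** 2
    neighbors = lambda x: [x - 1, x + 1]
    print(hill_climb(score, neighbors, start=0))   # climbs toward x = 7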


    •  Simulated annealing

    The name and inspiration come from annealing process in metallurgy, a technique

    involving heating and controlled cooling of a material to increase the size of its

    crystals and reduce their defects. The heat causes the atoms to become unstuck from

    their initial positions (a local minimum of the internal energy) and wander randomly

    through states of higher energy; the slow cooling gives them more chances of finding

    configurations with lower internal energy than the initial one.

    In the simulated annealing method, each point of the search space is compared to a

    state of some physical system, and the function to be minimized is interpreted as the

    internal energy of the system in that state. Therefore the goal is to bring the system,

    from an arbitrary initial state, to a state with the minimum possible energy.
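The following sketch shows one common way this idea is coded: a random move is always accepted if it lowers the "energy", and accepted with probability exp(-Δ/T) otherwise, while the temperature T is slowly reduced. The cooling schedule, move size and test function are assumptions made for the example.

    # Minimal simulated annealing sketch (illustrative cooling schedule and moves).
    import math
    import random

    def simulated_annealing(energy, x0, temp=1.0, cooling=0.995, steps=5000):
        x, e = x0, energy(x0)
        best_x, best_e = x, e
        for _ in range(steps):
            cand = x + random.uniform(-0.5, 0.5)        # random neighboring state
            e_cand = energy(cand)
            delta = e_cand - e
            # accept downhill moves always, uphill moves with probability exp(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = x, e
            temp *= cooling                             # slow cooling
        return best_x, best_e

    # Hypothetical multimodal test function with its global minimum near x = 2.
    energy = lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x)
    print(simulated_annealing(energy, x0=10.0))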

    •  Genetic algorithms

    A genetic algorithm (GA)  is a search technique used in computer science to find

    approximate solutions to optimization and search problems. Specifically it falls into

    the category of local search techniques and is therefore generally an incomplete

    search. Genetic algorithms are a particular class of evolutionary algorithms that use

    techniques inspired by evolutionary biology such as inheritance, mutation, selection,

    and crossover (also called recombination).

Genetic algorithms are typically implemented as a computer simulation in which a

     population of abstract representations (called chromosomes) of candidate solutions

    (called individuals) to an optimization problem, evolves toward better solutions.

    Traditionally, solutions are represented in binary as strings of 0s and 1s, but different

    encodings are also possible. The evolution starts from a population of completely

random individuals and occurs in generations. In each generation, the fitness of the

    whole population is evaluated, multiple individuals are stochastically selected from

    the current population (based on their fitness), and modified (mutated or recombined)

    to form a new population. The new population is then used in the next iteration of the

    algorithm.
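The sketch below mirrors this description for a toy problem: binary strings as chromosomes, a simple fitness function, tournament selection, single-point crossover and bit-flip mutation. All settings are illustrative assumptions, not values from the notes.

    # Minimal genetic algorithm sketch on binary strings (illustrative settings only).
    import random

    STRING_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 30, 60, 0.02

    def fitness(chrom):
        return sum(chrom)                  # toy fitness: number of 1s ("one-max" problem)

    def select(pop):
        a, b = random.choice(pop), random.choice(pop)
        return a if fitness(a) >= fitness(b) else b     # tournament selection

    def crossover(p1, p2):
        point = random.randint(1, STRING_LEN - 1)       # single-point crossover
        return p1[:point] + p2[point:]

    def mutate(chrom):
        return [1 - g if random.random() < MUT_RATE else g for g in chrom]

    population = [[random.randint(0, 1) for _ in range(STRING_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]

    print(max(fitness(c) for c in population))          # approaches STRING_LEN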


    •  Ant colony optimization

    In the real world, ants (initially) wander randomly, and upon finding food return to

    their colony while laying down pheromone trails. If other ants find such a path, they

    are likely not to keep traveling at random, but instead follow the trail laid by earlier

    ants, returning and reinforcing it, if they eventually find any food.

    Over time, however, the pheromone trail starts to evaporate, thus reducing its

    attractive strength. The more time it takes for an ant to travel down the path and back

    again, the more time the pheromones have to evaporate. A short path, by comparison,

    gets marched over faster, and thus the pheromone density remains high as it is laid on

the path as fast as it can evaporate. Pheromone evaporation also has the advantage of

    avoiding the convergence to a local optimal solution. If there was no evaporation at

    all, the paths chosen by the first ants would tend to be excessively attractive to the

    following ones. In that case, the exploration of the solution space would be

    constrained.

    Thus, when one ant finds a good (short) path from the colony to a food source, other

    ants are more likely to follow that path, and such positive feedback eventually leaves

    all the ants following a single path. The idea of the ant colony algorithm is to mimic

    this behavior with "simulated ants" walking around the search space representing the

     problem to be solved.

    Ant colony optimization algorithms have been used to produce near-optimal solutions

    to the traveling salesman problem. They have an advantage over simulated annealing

    and genetic algorithm approaches when the graph may change dynamically. The ant

    colony algorithm can be run continuously and can adapt to changes in real time. This

    is of interest in network routing and urban transportation systems.
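The trail-laying and evaporation described above are usually captured by an update rule of the form below (a standard formulation stated here for reference, not taken from the notes), where ρ is the evaporation rate and Δτ_ij^k is the pheromone deposited on edge (i, j) by ant k, typically inversely proportional to the length of that ant's tour.

    % Standard pheromone update in ant colony optimization (for reference).
    \tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
    \qquad 0 < \rho \le 1.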


    Optimization Methods: Optimization using Calculus-Stationary Points

    Stationary points: Functions of Single and Two Variables

    Introduction

    In this session, stationary points of a function are defined. The necessary and sufficient

    conditions for the relative maximum of a function of single or two variables are also

    discussed. The global optimum is also defined in comparison to the relative or local optimum.

    Stationary points

    For a continuous and differentiable function f ( x) a stationary  point x* is a point at which the

    slope of the function vanishes, i.e.  f ’( x) = 0 at  x = x*, where  x*  belongs to its domain of

    definition. 

Fig. 1: A minimum, an inflection point, and a maximum.

    A stationary point may be a minimum, maximum or an inflection point (Fig. 1).
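A short worked example (constructed here, not part of the notes) illustrates all three possibilities.

    % Stationary points of f(x) = x^3 - 3x and of f(x) = x^3.
    f(x) = x^3 - 3x:\quad f'(x) = 3x^2 - 3 = 0 \;\Rightarrow\; x^* = \pm 1;
    \quad f''(-1) = -6 < 0 \;(\text{relative maximum}),\quad f''(1) = 6 > 0 \;(\text{relative minimum}).

    f(x) = x^3:\quad f'(0) = 0 \text{ and } f''(0) = 0, \text{ and } f' \text{ does not change sign at } x = 0,
    \;\text{so } x = 0 \text{ is an inflection point.}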

    Relative and Global Optimum

A function is said to have a relative or local minimum at x = x* if f(x^*) \le f(x^* + h) for all sufficiently small positive and negative values of h, i.e. in the near vicinity of the point x*. Similarly, a point x* is called a relative or local maximum if f(x^*) \ge f(x^* + h) for all values of h sufficiently close to zero. A function is said to have a global or absolute minimum at x = x* if f(x^*) \le f(x) for all x in the domain over which f(x) is defined. Similarly, a function is

    D Nagesh Kumar, IISc, Bangalore M2L1

     A.BENHARI 27 A.BENHARI 27 A.BENHARI 27 A.BENHARI 27

  • 8/20/2019 Abdelkader BENHARI Optimisation Notes.pdf

    31/252

    Optimization Methods: Optimization using Calculus-Stationary Points 2

    said to have a global or absolute maximum at x = x* if *( ) ( ) f x f x≥ for all x in the domain

    over which f ( x) is defined.

    Figure 2 shows the global and local optimum points.

Fig. 2: Relative and global optima of f(x) on an interval [a, b]. A1, A2, A3 = relative maxima, with A2 the global maximum; B1, B2 = relative minima, with B1 the global minimum. In the second plot, the single relative minimum is also the global optimum.

    Functions of a single variable

Consider the function f(x) defined for a ≤ x ≤ b. To find the value of x* ∈ [a, b] such that x = x* maximizes f(x), we need to solve a single-variable optimization problem. We have the following theorems to understand the necessary and sufficient conditions for the relative maximum of a function of a single variable.

Necessary condition: For a single-variable function f(x) defined for x ∈ [a, b] which has a relative maximum at x = x*, x* ∈ [a, b], if the derivative f'(x) = df(x)/dx exists as a finite number at x = x*, then f'(x*) = 0. This can be understood from the following.


     Proof.

Since f'(x*) is stated to exist, we have

f'(x*) = lim (h→0) [f(x* + h) - f(x*)] / h    (1)

From our earlier discussion on relative maxima we have f(x*) ≥ f(x* + h) for h → 0. Hence

[f(x* + h) - f(x*)] / h ≥ 0    for h < 0    (2)

[f(x* + h) - f(x*)] / h ≤ 0    for h > 0    (3)

which implies that for sufficiently small negative values of h we have f'(x*) ≥ 0, and for sufficiently small positive values of h we have f'(x*) ≤ 0. In order to satisfy both (2) and (3), f'(x*) = 0. Hence this gives the necessary condition for a relative maximum of f(x) at x = x*.

It has to be kept in mind that the above theorem holds for a relative minimum as well. The theorem applies only where the function is continuous and differentiable. It cannot indicate whether a maximum or minimum exists at a point where the derivative fails to exist. This scenario is shown in Fig. 3, where the slopes m1 and m2 at the point of the maximum are unequal; the derivative does not exist there and the theorem cannot be applied. The theorem also does not cover the case where the maximum or minimum occurs at an end point of the interval of definition, for the same reason: the function is not differentiable at the boundaries. Finally, the theorem does not say that the function will have a maximum or minimum at every point where f'(x) = 0, since the condition f'(x) = 0 holds for all stationary points, which include inflection points that are neither maxima nor minima. A point of inflection is shown in Fig. 1.


Fig. 3: A function f(x) on [a, b] with a maximum at x*, where the slopes m1 and m2 on either side of x* are unequal, so the derivative does not exist at x*.

Sufficient condition: For the same function stated above, let f'(x*) = f''(x*) = . . . = f^(n-1)(x*) = 0, but f^(n)(x*) ≠ 0. Then f(x*) is (a) a minimum value of f(x) if f^(n)(x*) > 0 and n is even; (b) a maximum value of f(x) if f^(n)(x*) < 0 and n is even; (c) neither a maximum nor a minimum if n is odd.

     Proof

Applying Taylor's theorem with remainder after n terms, we have

f(x* + h) = f(x*) + h f'(x*) + (h²/2!) f''(x*) + ... + (h^(n-1)/(n-1)!) f^(n-1)(x*) + (h^n/n!) f^(n)(x* + θh)    (4)

for 0 < θ < 1.


Since f'(x*) = f''(x*) = . . . = f^(n-1)(x*) = 0, equation (4) reduces to

f(x* + h) - f(x*) = (h^n/n!) f^(n)(x* + θh)

As h → 0, f^(n)(x* + θh) has the same sign as f^(n)(x*). When n is even, h^n/n! is positive irrespective of the sign of h, so x* is a minimum if f^(n)(x*) is positive and a maximum if f^(n)(x*) is negative, with f(x) concave around x*. When n is odd, h^n/n! changes sign with the change in the sign of h and hence the point x* is neither a maximum nor a minimum. In this case the point x* is called a point of inflection.

     Example 1.

Find the optimum value of the function f(x) = x² + 3x - 5 and also state whether the function attains a maximum or a minimum.

    Solution

f'(x) = 2x + 3 = 0 for a maximum or minimum, giving x* = -3/2.

f''(x*) = 2, which is positive; hence the point x* = -3/2 is a point of minimum and the function attains a minimum value of -29/4 at this point.

     Example 2.

Find the optimum value of the function f(x) = (x - 2)⁴ and also state whether the function attains a maximum or a minimum.

    Solution

f'(x) = 4(x - 2)³ = 0 for a maximum or minimum, giving x = x* = 2.

f''(x*) = 12(x* - 2)² = 0 at x* = 2

f'''(x*) = 24(x* - 2) = 0 at x* = 2


f''''(x*) = 24 at x* = 2

Hence f^(n)(x*) is positive with n = 4, which is even, so the point x = x* = 2 is a point of minimum and the function attains a minimum value of 0 at this point.

     Example 3.

Analyze the function f(x) = 12x⁵ - 45x⁴ + 40x³ + 5 and classify the stationary points as maxima, minima and points of inflection.

    Solution

f'(x) = 60x⁴ - 180x³ + 120x² = 0

which gives x⁴ - 3x³ + 2x² = 0, i.e. x²(x - 1)(x - 2) = 0

so x = 0, 1, 2

Consider the point x = x* = 0

f''(x*) = 240(x*)³ - 540(x*)² + 240x* = 0 at x* = 0

f'''(x*) = 720(x*)² - 1080x* + 240 = 240 at x* = 0

Since the third derivative is non-zero, x = x* = 0 is neither a point of maximum nor minimum; it is a point of inflection.

    Consider x = x* = 1

f''(x*) = 240(x*)³ - 540(x*)² + 240x* = -60 at x* = 1

Since the second derivative is negative, the point x = x* = 1 is a point of local maximum with a maximum value of f(x) = 12 - 45 + 40 + 5 = 12.


    Consider x = x* = 2

f''(x*) = 240(x*)³ - 540(x*)² + 240x* = 240 at x* = 2

Since the second derivative is positive, the point x = x* = 2 is a point of local minimum with a minimum value of f(x) = -11.
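The higher-derivative test used in Examples 2 and 3 can be checked symbolically. The short sketch below assumes the SymPy library; the helper name classify_stationary_point is only an illustrative choice and not part of these notes.

import sympy as sp

x = sp.symbols('x')

def classify_stationary_point(f, x0):
    # find the order n of the first non-vanishing derivative at the stationary point x0
    n, d = 1, sp.diff(f, x)
    while sp.simplify(d.subs(x, x0)) == 0:
        n, d = n + 1, sp.diff(d, x)
    if n % 2 == 1:
        return 'point of inflection'
    return 'minimum' if d.subs(x, x0) > 0 else 'maximum'

f2 = (x - 2)**4                          # Example 2
f3 = 12*x**5 - 45*x**4 + 40*x**3 + 5     # Example 3
print(classify_stationary_point(f2, 2))  # minimum (n = 4, f''''(2) = 24 > 0)
print(classify_stationary_point(f3, 0))  # point of inflection (n = 3)
print(classify_stationary_point(f3, 1))  # maximum (f''(1) = -60)
print(classify_stationary_point(f3, 2))  # minimum (f''(2) = 240)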

     Example 4.

The horsepower generated by a Pelton wheel is proportional to u(v - u), where u is the velocity of the wheel, which is variable, and v is the velocity of the jet, which is fixed. Show that the efficiency of the Pelton wheel will be maximum at u = v/2.

    Solution

f = K u(v - u), where K is a proportionality constant (assumed positive).

∂f/∂u = 0  gives  K(v - 2u) = 0,  or  u = v/2

∂²f/∂u² = -2K, which is negative.

Hence, f is maximum at u = v/2.
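This result can also be confirmed symbolically. The short sketch below is only an illustration and assumes the SymPy library; the symbols u, v and K follow the example.

import sympy as sp

u, v, K = sp.symbols('u v K', positive=True)
f = K * u * (v - u)
u_star = sp.solve(sp.diff(f, u), u)[0]   # stationary point with respect to u
print(u_star)                            # v/2
print(sp.diff(f, u, 2))                  # -2*K, negative, so u = v/2 is a maximum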

    Functions of two variables

    This concept may be easily extended to functions of multiple variables. Functions of two

    variables are best illustrated by contour maps, analogous to geographical maps. A contour is a

line representing a constant value of f(X) as shown in Fig. 4. From this we can identify

    maxima, minima and points of inflection.


    Necessary conditions

As can be seen in Figs. 4 and 5, perturbations from a point of local minimum in any direction result in an increase in the response function f(X); the slope of the function is therefore zero at a point of local minimum. Similarly, at maxima and points of inflection the slope is zero, so the first derivatives of the function with respect to the variables are zero.

This gives ∂f/∂x1 = 0 and ∂f/∂x2 = 0 at the stationary points, i.e. the gradient vector of f(X), ∇x f, at X = X* = [x1, x2], defined as follows, must equal zero:

∇x f = [∂f(X*)/∂x1, ∂f(X*)/∂x2]ᵀ = 0

    This is the necessary condition.

Fig. 4: Contour plot of a function of two variables x1 and x2.


Fig. 5: Surface plot of a function of two variables showing the global maximum, relative maxima, relative minima and the global minimum.

    Sufficient conditions

Consider the following second order derivatives:

∂²f/∂x1², ∂²f/∂x2², ∂²f/∂x1∂x2

    The Hessian matrix defined by H is made using the above second order derivatives.


H = [[∂²f/∂x1², ∂²f/∂x1∂x2], [∂²f/∂x2∂x1, ∂²f/∂x2²]], evaluated at the point [x1, x2].

(a) If H is positive definite then the point X = [x1, x2] is a point of local minimum.

(b) If H is negative definite then the point X = [x1, x2] is a point of local maximum.

(c) If H is neither positive definite nor negative definite then the point X = [x1, x2] is neither a point of maximum nor of minimum.

A square matrix is positive definite if all its eigenvalues are positive, and it is negative definite if all its eigenvalues are negative. If some of the eigenvalues are positive and some negative, then the matrix is neither positive definite nor negative definite. To calculate the eigenvalues λ of a square matrix A, the following equation is solved:

|A - λI| = 0

    The above rules give the sufficient conditions for the optimization problem of two variables.

    Optimization of multiple variable problems will be discussed in detail in lecture notes 3

    (Module 2).
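A small numerical illustration of these rules is sketched below using NumPy; the Hessian matrices shown are hypothetical examples, not taken from these notes.

import numpy as np

def classify_from_hessian(H):
    # eigenvalues of the (symmetric) Hessian at the stationary point
    eig = np.linalg.eigvalsh(np.asarray(H, dtype=float))
    if np.all(eig > 0):
        return "local minimum (H positive definite)"
    if np.all(eig < 0):
        return "local maximum (H negative definite)"
    return "neither a maximum nor a minimum"

print(classify_from_hessian([[2.0, 0.0], [0.0, 3.0]]))    # local minimum
print(classify_from_hessian([[-2.0, 1.0], [1.0, -3.0]]))  # local maximum
print(classify_from_hessian([[2.0, 0.0], [0.0, -3.0]]))   # neither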

     Example 5.

    Locate the stationary points of f (X) and classify them as relative maxima, relative minima or

    neither based on the rules discussed in the lecture.

f(X) = (2/3)x1³ - 2x1x2 - 5x1 + 2x2² + 4x2 + 5


    Solution

From ∂f(X)/∂x2 = 0:  x1 = 2x2 + 2

From ∂f(X)/∂x1 = 0:  8x2² + 14x2 + 3 = 0

(2x2 + 3)(4x2 + 1) = 0

x2 = -3/2 or x2 = -1/4

so the two stationary points are

X1 = [-1, -3/2]

and

X2 = [3/2, -1/4]

The Hessian of f(X) is built from

∂²f/∂x1² = 4x1;  ∂²f/∂x2² = 4;  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = -2


H = [[4x1, -2], [-2, 4]]

λI - H = [[λ - 4x1, 2], [2, λ - 4]]

At X1 = [-1, -3/2],

|λI - H| = (λ + 4)(λ - 4) - 4 = 0

λ² - 16 - 4 = 0

λ² = 20

λ1 = +√20,  λ2 = -√20

Since one eigenvalue is positive and one negative, X1 is neither a relative maximum nor a relative minimum.

At X2 = [3/2, -1/4],

|λI - H| = (λ - 6)(λ - 4) - 4 = 0

λ² - 10λ + 20 = 0

λ1 = 5 + √5,  λ2 = 5 - √5

Since both eigenvalues are positive, X2 is a local minimum.

The minimum value of f(X) is -0.375.
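The same calculation can be reproduced symbolically. The sketch below is only illustrative and assumes the SymPy library.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Rational(2, 3)*x1**3 - 2*x1*x2 - 5*x1 + 2*x2**2 + 4*x2 + 5

grad = [sp.diff(f, v) for v in (x1, x2)]
H = sp.hessian(f, (x1, x2))

for pt in sp.solve(grad, [x1, x2], dict=True):
    eigs = [sp.simplify(e) for e in H.subs(pt).eigenvals()]
    print(pt, eigs, f.subs(pt))

# Expected: X1 = (-1, -3/2) gives eigenvalues of mixed sign (a saddle point), while
# X2 = (3/2, -1/4) gives two positive eigenvalues and f = -3/8 = -0.375.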


     Example 6

The ultimate strength attained by concrete is found to be based on a certain empirical relationship between the ratios of cement and concrete used. Our objective is to maximize the strength attained by hardened concrete, given by f(X) = 20 + 2x1 - x1² + 6x2 - (3/2)x2², where x1 and x2 are variables based on the cement and concrete ratios.

    Solution

Given f(X) = 20 + 2x1 - x1² + 6x2 - (3/2)x2², where X = [x1, x2].

The gradient vector, set to zero to determine the stationary point X*, is

∇x f = [∂f(X*)/∂x1, ∂f(X*)/∂x2]ᵀ = [2 - 2x1, 6 - 3x2]ᵀ = 0

Solving, we get X* = [1, 2].

∂²f/∂x1² = -2;  ∂²f/∂x2² = -3;  ∂²f/∂x1∂x2 = 0

H = [[-2, 0], [0, -3]]

|λI - H| = (λ + 2)(λ + 3) = 0

Here the values of λ do not depend on X: λ1 = -2 and λ2 = -3. Since both eigenvalues are negative, f(X) is concave, and the required ratio x1:x2 = 1:2 gives a global maximum strength of f(X) = 27 units.
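A brief symbolic check of this example is sketched below, assuming the SymPy library.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 20 + 2*x1 - x1**2 + 6*x2 - sp.Rational(3, 2)*x2**2

pt = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2])  # {x1: 1, x2: 2}
H = sp.hessian(f, (x1, x2))
print(pt, list(H.eigenvals()), f.subs(pt))  # eigenvalues -2 and -3, f = 27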


    Optimization Methods: Optimization using Calculus-Convexity and Concavity

    Convexity and Concavity of Functions of One and Two Variables

    Introduction

In the previous class we studied stationary points and the definitions of relative and global optima. The necessary and sufficient conditions required for a relative optimum of functions of one variable, and their extension to functions of two variables, were also studied. In this lecture, determination of the convexity and concavity of functions is discussed.

    The analyst must determine whether the objective functions and constraint equations are

    convex or concave. In real-world problems, if the objective function or the constraints are not

    convex or concave, the problem is usually mathematically intractable.

    Functions of one variable

    Convex function

A real-valued function f defined on an interval (or on any convex subset C of some vector space) is called convex if, for any two points a and b in its domain C and any t in [0,1], we have

f(ta + (1 - t)b) ≤ t f(a) + (1 - t) f(b)

    Fig. 1

    In other words, a function is convex if and only if its epigraph (the set of points lying on or

    above the graph) is a convex set. A function is also said to be strictly convex if


f(ta + (1 - t)b) < t f(a) + (1 - t) f(b)

for any t in (0,1); in this case a line connecting any two points on the function lies completely above the function. These relationships are illustrated in Fig. 1.

    Testing for convexity of a single variable function

A function is convex if its slope is non-decreasing, i.e. ∂²f/∂x² ≥ 0. It is strictly convex if its slope is continually increasing, i.e. ∂²f/∂x² > 0 throughout the function.

    Properties of convex functions

A convex function f, defined on some convex open interval C, is continuous on C and differentiable at all but at most countably many points. If C is closed, then f may fail to be continuous at the end points of C.

A continuous function on an interval C is convex if and only if

f((a + b)/2) ≤ (f(a) + f(b))/2

for all a and b in C.

    A differentiable function of one variable is convex on an interval if and only if its derivative

    is monotonically non-decreasing on that interval.

A continuously differentiable function of one variable is convex on an interval if and only if the function lies above all of its tangents: f(b) ≥ f(a) + f'(a)(b - a) for all a and b in the interval.

    A twice differentiable function of one variable is convex on an interval if and only if its

    second derivative is non-negative in that interval; this gives a practical test for convexity. If

    its second derivative is positive then it is strictly convex, but the converse does not hold, as

shown by f(x) = x⁴.
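This second-derivative test can be checked numerically. The sketch below uses NumPy and a central-difference approximation on a grid; the grid and the test functions are illustrative choices.

import numpy as np

def second_derivative(f, xs, h=1e-4):
    # central-difference approximation of f''(x) at each grid point
    return (f(xs + h) - 2*f(xs) + f(xs - h)) / h**2

xs = np.linspace(-2.0, 2.0, 401)
print(np.all(second_derivative(lambda x: x**2, xs) >= 0))  # True: x**2 is convex
# x**4 is strictly convex even though f''(0) = 0, so a strictly positive second
# derivative is sufficient but not necessary for strict convexity:
print(np.all(second_derivative(lambda x: x**4, xs) >= 0))  # True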

    More generally, a continuous, twice differentiable function of several variables is convex on

    a convex set if and only if its Hessian matrix is positive semi definite on the interior of the

    convex set.


    If two functions f  and g are convex, then so is any weighted combination a  f  + b g with non-

    negative coefficients a and b. Likewise, if f  and g are convex, then the function max{ f ,g} is

    convex.

    A strictly convex function will have only one minimum which is also the global minimum.

    Examples

•  The second derivative of x² is 2; it follows that x² is a convex function of x.

    •  The absolute value function | x| is convex, even though it does not have a derivative at

     x = 0.

•  The function f with domain [0,1] defined by f(0) = f(1) = 1, f(x) = 0 for 0 < x < 1 is convex; it is continuous on the open interval (0,1), but not continuous at the end points 0 and 1.


Concave function

A real-valued function f is called concave if, for any two points a and b in its domain and any t in [0,1],

f(ta + (1 - t)b) ≥ t f(a) + (1 - t) f(b)

Additionally, f(x) is strictly concave if

f(ta + (1 - t)b) > t f(a) + (1 - t) f(b)

for any t in (0,1).

     These relationships are illustrated in Fig. 2

    Fig. 2

    Testing for concavity of a single variable function

A function is concave if its slope is non-increasing, i.e. ∂²f/∂x² ≤ 0. It is strictly concave if its slope is continually decreasing, i.e. ∂²f/∂x² < 0 throughout the function.

Properties of concave functions

A continuous function on C is concave if and only if

f((a + b)/2) ≥ (f(a) + f(b))/2

for all a and b in C.

    Equivalently,  f ( x) is concave on [a, b] if and only if the function − f ( x) is convex on every

    subinterval of [a, b].


If f(x) is twice-differentiable, then f(x) is concave if and only if f''(x) is non-positive. If its second derivative is negative then it is strictly concave, but the opposite is not true, as shown by f(x) = -x⁴.

A function is called quasiconcave if and only if there is an x0 such that for all x < x0, f(x) is non-decreasing while for all x > x0 it is non-increasing. x0 can also be ±∞, making the function non-decreasing (non-increasing) for all x. The opposite of quasiconcave is quasiconvex.

     Example 1

Consider the example in lecture notes 1 for a function of a single variable. Locate the stationary points of f(x) = 12x⁵ - 45x⁴ + 40x³ + 5 and find out if the function is convex, concave or neither at the points of optima, based on the testing rules discussed above.

    Solution

f'(x) = 60x⁴ - 180x³ + 120x² = 0

which gives x⁴ - 3x³ + 2x² = 0

so x = 0, 1, 2

Consider the point x = x* = 0

f''(x*) = 240(x*)³ - 540(x*)² + 240x* = 0 at x* = 0

f'''(x*) = 720(x*)² - 1080x* + 240 = 240 at x* = 0

Since the third derivative is non-zero, x = x* = 0 is neither a point of maximum nor minimum; it is a point of inflection. Hence the function is neither convex nor concave at this point.

    Consider x = x* = 1

f''(x*) = 240(x*)³ - 540(x*)² + 240x* = -60 at x* = 1

Since the second derivative is negative, the point x = x* = 1 is a point of local maximum with a maximum value of f(x) = 12 - 45 + 40 + 5 = 12. At this point the function is concave since ∂²f/∂x² < 0.

    Consider x = x* = 2

f''(x*) = 240(x*)³ - 540(x*)² + 240x* = 240 at x* = 2

Since the second derivative is positive, the point x = x* = 2 is a point of local minimum with a minimum value of f(x) = -11. At this point the function is convex since ∂²f/∂x² > 0.


    Functions of two variables

A function of two variables, f(X), where X is a vector [x1, x2], is strictly convex if

f(tX1 + (1 - t)X2) < t f(X1) + (1 - t) f(X2)

where X1 and X2 are points located by the coordinates given in their respective vectors. Similarly, a two-variable function is strictly concave if

f(tX1 + (1 - t)X2) > t f(X1) + (1 - t) f(X2)

Contour plot of a convex function is illustrated in Fig. 3.

Fig. 3

Contour plot of a convex function is shown in Fig. 4.

Fig. 4


    To determine convexity or concavity of a function of multiple variables, the eigenvalues of

    its Hessian matrix are examined and the following rules apply.

    (a)  If all eigenvalues of the Hessian are positive the function is strictly convex.

    (b) If all eigenvalues of the Hessian are negative the function is strictly concave.

    (c)  If some eigenvalues are positive and some are negative, or if some are zero, the

    function is neither strictly concave nor strictly convex.

     Example 2

    Consider the example in lecture notes 1 for a function of two variables. Locate the stationary

     points of  f (X) and find out if the function is convex, concave or neither at the points of

    optima based on the rules discussed in this lecture.

f(X) = (2/3)x1³ - 2x1x2 - 5x1 + 2x2² + 4x2 + 5

    Solution

The gradient vector, set to zero to determine the stationary points, is

∇x f = [∂f(X*)/∂x1, ∂f(X*)/∂x2]ᵀ = [2x1² - 2x2 - 5, -2x1 + 4x2 + 4]ᵀ = 0

Solving the above, the two stationary points are

X1 = [-1, -3/2]

and

X2 = [3/2, -1/4]

The Hessian of f(X) is built from

∂²f/∂x1² = 4x1;  ∂²f/∂x2² = 4;  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = -2

H = [[4x1, -2], [-2, 4]]

λI - H = [[λ - 4x1, 2], [2, λ - 4]]

At X1 = [-1, -3/2],

|λI - H| = (λ + 4)(λ - 4) - 4 = 0

λ² - 16 - 4 = 0


λ² = 20

λ1 = +√20,  λ2 = -√20

Since one eigenvalue is positive and one negative, X1 is neither a relative maximum nor a relative minimum. Hence at X1 the function is neither convex nor concave.

At X2 = [3/2, -1/4],

|λI - H| = (λ - 6)(λ - 4) - 4 = 0

λ² - 10λ + 20 = 0

λ1 = 5 + √5,  λ2 = 5 - √5

Since both eigenvalues are positive, X2 is a local minimum, and the function is convex at this point.


    Optimization Methods: Optimization using Calculus - Unconstrained Optimization

    Optimization of Functions of Multiple Variables: Unconstrained Optimization

    Introduction

In the previous lectures we learnt how to determine the convexity and concavity of functions of one and two variables, how to locate their stationary points, and how to examine higher derivatives to classify those stationary points as local minima, local maxima or points of inflection.

In this lecture, functions of multiple variables, which are more difficult to analyze owing to the difficulty of graphical representation and the tedious calculations involved in the mathematical analysis, will be studied for unconstrained optimization. This is done with the aid of the gradient vector and the Hessian matrix. Examples are discussed to show the implementation of the technique.

    Unconstrained optimization

If a convex function is to be minimized, the stationary point is the global minimum and the analysis is relatively straightforward, as discussed earlier. A similar situation exists for maximizing a concave function. The necessary and sufficient conditions for the optimization of an unconstrained function of several variables are given below.

    Necessary condition

    In case of multivariable functions a necessary condition for a stationary point of the function

     f (X) is that each partial derivative is equal to zero. In other words, each element of the

    gradient vector defined below must be equal to zero.

i.e. the gradient vector of f(X), ∇x f, at X = X*, defined as follows, must be equal to zero:

∇x f = [∂f(X*)/∂x1, ∂f(X*)/∂x2, ..., ∂f(X*)/∂xn]ᵀ = 0
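This first-order condition can be checked numerically. The sketch below approximates the gradient by central differences using NumPy and verifies that it vanishes at a candidate stationary point; the test function and the step size h are illustrative choices.

import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # central-difference approximation of each partial derivative
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

f = lambda x: (x[0] - 1)**2 + 2*(x[1] + 3)**2 + 0.5*x[2]**2
print(numerical_gradient(f, [1.0, -3.0, 0.0]))  # approximately [0, 0, 0]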
