Non-Linear Programming
© 2011 Daniel Kirschen and University of Washington
Motivation
• Method of Lagrange multipliers
  – Very useful insight into solutions
  – Analytical solution practical only for small problems
  – Direct application not practical for real-life problems because these problems are too large
  – Difficulties when the problem is non-convex
• Often need to search for the solution of practical optimization problems using:
  – The objective function only, or
  – The objective function and its first derivative, or
  – The objective function and its first and second derivatives
Naïve One-Dimensional Search
• Suppose:
  – That we want to find the value of x that minimizes f(x)
  – That the only thing we can do is calculate the value of f(x) for any value of x
• We could calculate f(x) for a range of values of x and choose the one that minimizes f(x)
[Figure: f(x) evaluated over a range of values of x]
Naïve One-Dimensional Search
• Requires a considerable amount of computing time if the range is large and good accuracy is needed
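A minimal sketch of this naïve search in Python (the function name and grid size are illustrative, not from the slides):

```python
def naive_search(f, lo, hi, n=1000):
    """Evaluate f on a grid of n points over [lo, hi] and return the best x.

    Accuracy is limited by the grid spacing (hi - lo) / (n - 1), which is
    why the slides call this approach expensive: high accuracy over a
    large range requires very many evaluations of f.
    """
    best_x, best_f = lo, f(lo)
    for i in range(1, n):
        x = lo + i * (hi - lo) / (n - 1)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x

# Example: minimize f(x) = (x - 2)^2 over [0, 5]
x_star = naive_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```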
One-Dimensional Search
[Figure: f(x) evaluated at a first point x0]
One-Dimensional Search
[Figure: f(x) evaluated at x0 and x1]
One-Dimensional Search
[Figure: f(x) evaluated at x0, x1, and x2; the current search range is [x0, x2]]
If the function is convex, we have bracketed the optimum.
One-Dimensional Search
[Figure: f(x) with a new point x3 added inside the bracket]
The optimum is between x1 and x2.
We do not need to consider x0 anymore.
One-Dimensional Search
[Figure: f(x) with a further point x4 narrowing the bracket]
Repeat the process until the range is sufficiently small.
One-Dimensional Search
[Figure: f(x) sampled at x0 through x4]
The procedure is valid only if the function is convex!
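The interval-reduction procedure above can be sketched in Python. Here the bracket is shrunk by the golden ratio, one common choice; the slides do not fix a specific reduction ratio:

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Shrink the bracket [a, b] until it is smaller than tol.

    Assumes f is convex (unimodal) on [a, b], as the slides require;
    on a non-convex function this may converge to a local minimum.
    """
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    while b - a > tol:
        if f(x1) < f(x2):       # minimum lies in [a, x2]; discard (x2, b]
            b, x2 = x2, x1
            x1 = b - inv_phi * (b - a)
        else:                   # minimum lies in [x1, b]; discard [a, x1)
            a, x1 = x1, x2
            x2 = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: minimize f(x) = (x - 2)^2 on [0, 5]
x_star = golden_section_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```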
Multi-Dimensional Search
• Unidirectional search not applicable
• Naïve search becomes totally impossible as the dimension of the problem increases
• If we can calculate the first derivatives of the objective function, much more efficient searches can be developed
• The gradient of a function gives the direction in which it increases/decreases fastest
Steepest Ascent/Descent Algorithm
[Figure: contours of the objective in the (x1, x2) plane with successive steepest-descent steps]
Unidirectional Search
[Figure: one-dimensional search along the gradient direction over the contours of the objective function]
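A minimal sketch of steepest descent, assuming a fixed step length instead of the unidirectional search along each direction that the figures suggest:

```python
def steepest_descent(grad, x0, step=0.1, tol=1e-6, max_iter=10000):
    """Minimize by repeatedly stepping against the gradient.

    grad(x) must return the gradient vector of the objective at x.
    A fixed step length is used here for simplicity; the slides
    instead choose the step with a unidirectional (line) search.
    """
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:   # gradient ~ 0: stop
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Example: minimize f(x1, x2) = (x1 - 1)^2 + 2*(x2 + 3)^2,
# whose gradient is (2*(x1 - 1), 4*(x2 + 3)), minimum at (1, -3)
grad = lambda x: [2 * (x[0] - 1), 4 * (x[1] + 3)]
x_star = steepest_descent(grad, [0.0, 0.0])
```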
Choosing a Direction
• Direction of steepest ascent/descent is not always the best choice
• Other techniques have been used with varying degrees of success
• In particular, the direction chosen must be consistent with the equality constraints
How far to go in that direction?
• Unidirectional searches can be time-consuming
• Second-order techniques that use information about the second derivative of the objective function can be used to speed up the process
• Problem with the inequality constraints:
  – There may be a lot of inequality constraints
  – Checking all of them every time we move in one direction can take an enormous amount of computing time
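As one example of such a second-order technique (the slides do not prescribe a specific one), Newton's method locates a stationary point of a one-dimensional objective using its first and second derivatives:

```python
def newton_minimize(fp, fpp, x0, tol=1e-8, max_iter=50):
    """Find a stationary point of f by Newton's method on f'(x) = 0.

    fp and fpp are the first and second derivatives of f.
    Each step solves the local quadratic model: x <- x - f'(x)/f''(x).
    Converges quadratically near a minimum where f''(x) > 0.
    """
    x = x0
    for _ in range(max_iter):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = exp(x) - 2x, so f'(x) = exp(x) - 2, f''(x) = exp(x);
# the minimum is at x = ln(2) ~ 0.6931
import math
x_star = newton_minimize(lambda x: math.exp(x) - 2,
                         lambda x: math.exp(x), 0.0)
```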
Handling of inequality constraints
[Figure: path in the (x1, x2) plane following the gradient toward a constraint boundary]
Move in the direction of the gradient. How do I know that I have to stop here?
Penalty functions
• Replace enforcement of inequality constraints by the addition of penalty terms to the objective function
[Figure: penalty versus x with limits xmin and xmax; beyond xmax the penalty rises as K(x − xmax)²]
Problem with penalty functions
• Stiffness of the penalty function must be increased progressively to enforce the constraints tightly enough
• Not a very efficient method
[Figure: progressively stiffer penalty functions around the limits xmin and xmax]
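The penalty idea can be sketched as follows. The inner unconstrained minimization is a simple grid-scan stand-in (a real implementation would use one of the search methods above), and the factor by which the stiffness K grows each round is an assumption, not from the slides:

```python
def argmin_on_grid(g, lo, hi, n=20001):
    """Crude unconstrained minimizer: best point on a fine grid."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=g)

def penalty_minimize(f, x_max, lo, hi, k0=1.0, rounds=10):
    """Enforce x <= x_max by adding K*(x - x_max)^2 when violated.

    K is increased each round, tightening the constraint progressively,
    as the slides describe.
    """
    k = k0
    x = lo
    for _ in range(rounds):
        x = argmin_on_grid(
            lambda t: f(t) + (k * (t - x_max) ** 2 if t > x_max else 0.0),
            lo, hi)
        k *= 10.0  # stiffen the penalty (growth factor is illustrative)
    return x

# Example: minimize f(x) = (x - 3)^2 subject to x <= 2; solution is x = 2
x_star = penalty_minimize(lambda x: (x - 3) ** 2, 2.0, 0.0, 5.0)
```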
Barrier functions
• Barrier must be made progressively closer to the limit
• Works better than penalty functions
• Interior point methods
[Figure: barrier cost rising steeply as x approaches the limits xmin and xmax from inside]
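A sketch of the barrier idea using a logarithmic barrier, the form used by interior point methods (the specific barrier function is an assumption; the slides only show its shape). Shrinking the weight mu moves the barrier progressively closer to the limit:

```python
import math

def barrier_minimize(f, x_max, lo, mu0=1.0, rounds=12, n=20001):
    """Enforce x < x_max with a logarithmic barrier -mu*ln(x_max - x).

    Each round shrinks mu, letting iterates approach the limit from
    inside the feasible region. The inner minimization is a grid scan
    stand-in; a real solver would take Newton steps instead.
    """
    mu = mu0
    x = lo
    eps = (x_max - lo) * 1e-6       # stay strictly inside the limit
    for _ in range(rounds):
        xs = [lo + i * (x_max - eps - lo) / (n - 1) for i in range(n)]
        x = min(xs, key=lambda t: f(t) - mu * math.log(x_max - t))
        mu *= 0.1                    # move the barrier closer to the limit
    return x

# Example: minimize f(x) = (x - 3)^2 subject to x < 2; solution -> x = 2
x_star = barrier_minimize(lambda x: (x - 3) ** 2, 2.0, 0.0)
```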
Non-Robustness
[Figure: non-convex objective in the (x1, x2) plane with starting points A, B, C, D and solutions X, Y]
Different starting points may lead to different solutions if the problem is not convex.
Conclusions
• Very sophisticated non-linear programming methods have been developed
• They can be difficult to use:
  – Different starting points may lead to different solutions
  – Some problems will require a lot of iterations
  – They may require a lot of “tuning”