Linear & Quadratic Programming
Chee Wei Tan
CS 8292: Advanced Topics in Convex Optimization and its Applications, Fall 2010
Outline
• Linear programming
• Norm minimization problems
• Dual linear programming
• Algorithms
• Quadratically constrained quadratic programming (QCQP)
• Least-squares
• Second order cone programming (SOCP)
• Dual quadratic programming
Acknowledgement: Thanks to Mung Chiang (Princeton), Stephen Boyd (Stanford) and Steven Low (Caltech) for the course materials in this class.
Linear Programming
Minimize linear function over linear inequality and equality constraints:
    minimize    c^T x
    subject to  Gx ⪯ h
                Ax = b

Variables: x ∈ R^n.

Standard form LP:

    minimize    c^T x
    subject to  Ax = b
                x ⪰ 0
The most well-known, widely used, and efficiently solvable class of optimization problems

Starting point of the appreciation-application cycle for convex optimization
Transformation To Standard Form
Introduce slack variables s_i for the inequality constraints:

    minimize    c^T x
    subject to  Gx + s = h
                Ax = b
                s ⪰ 0

Express x as the difference of two nonnegative variables x⁺, x⁻ ⪰ 0: x = x⁺ − x⁻

    minimize    c^T x⁺ − c^T x⁻
    subject to  Gx⁺ − Gx⁻ + s = h
                Ax⁺ − Ax⁻ = b
                x⁺, x⁻, s ⪰ 0

Now in LP standard form with variables x⁺, x⁻, s
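The reduction above is purely mechanical and can be sketched in a few lines of NumPy; a minimal sketch (the function name `to_standard_form` and the tiny instance are illustrative, not from the slides):

```python
import numpy as np

def to_standard_form(c, G, h, A, b):
    """Convert  min c^T x  s.t. Gx <= h, Ax = b  into standard form
       min c_std^T z  s.t. A_std z = b_std, z >= 0  with z = (x+, x-, s)."""
    m, n = G.shape
    p = A.shape[0]
    # Objective: c^T x+ - c^T x-, zero cost on the slacks s
    c_std = np.concatenate([c, -c, np.zeros(m)])
    # Equalities: [G -G I] z = h  and  [A -A 0] z = b
    A_std = np.block([[G, -G, np.eye(m)],
                      [A, -A, np.zeros((p, m))]])
    b_std = np.concatenate([h, b])
    return c_std, A_std, b_std

# Tiny instance: min x1 + x2  s.t.  x1 <= 2,  x1 + x2 = 1
c = np.array([1.0, 1.0])
G = np.array([[1.0, 0.0]]); h = np.array([2.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
c_std, A_std, b_std = to_standard_form(c, G, h, A, b)
print(A_std.shape)  # (2, 5): two equality rows, variables (x+, x-, s)
```

A feasible point x = (0, 1) maps to z = (x⁺, x⁻, s) = (0, 1, 0, 0, 2), which satisfies A_std z = b_std.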
Linear Fractional Programming
Minimize a ratio of affine functions over a polyhedron:

    minimize    (c^T x + d) / (e^T x + f)
    subject to  Gx ⪯ h
                Ax = b

Domain of the objective function: {x | e^T x + f > 0}

Not an LP. But if the feasible set is nonempty, it can be transformed into an equivalent LP with variables y, z:

    minimize    c^T y + dz
    subject to  Gy − hz ⪯ 0
                Ay − bz = 0
                e^T y + fz = 1
                z ≥ 0

Why: let y = x / (e^T x + f) and z = 1 / (e^T x + f)

"Charnes-Cooper" trick
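The Charnes-Cooper substitution can be checked numerically; a sketch assuming SciPy's `linprog` is available (the small fractional program below is an invented example, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

# Minimize (x1 + 1)/(x2 + 1) over 0 <= x1, 0 <= x2 <= 1:
# c = [1, 0], d = 1, e = [0, 1], f = 1,
# G rows encode -x1 <= 0, -x2 <= 0, x2 <= 1.
c, d = np.array([1.0, 0.0]), 1.0
e, f = np.array([0.0, 1.0]), 1.0
G = np.array([[-1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
h = np.array([0.0, 0.0, 1.0])

# Charnes-Cooper LP in variables (y, z):
#   min c^T y + d z   s.t.  Gy - hz <= 0,  e^T y + f z = 1,  z >= 0
c_lp = np.concatenate([c, [d]])
A_ub = np.hstack([G, -h[:, None]])
b_ub = np.zeros(3)
A_eq = np.concatenate([e, [f]])[None, :]
b_eq = np.array([1.0])
bounds = [(None, None), (None, None), (0, None)]
res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

y, z = res.x[:2], res.x[2]
x = y / z                # recover the fractional-program solution
print(x, res.fun)        # optimum at x = (0, 1), ratio (0+1)/(1+1) = 1/2
```

The recovered x minimizes the original ratio: pushing x1 down and x2 up is clearly optimal here, giving value 1/2, which matches the LP optimum.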
Norm Minimization Problems
• l1 norm: ‖x‖₁ = Σ_{i=1}^n |x_i|

  Minimizing ‖Ax − b‖₁ over x, with A ∈ R^{m×n}, is equivalent to this LP in x ∈ R^n, s ∈ R^m:

      minimize    1^T s
      subject to  Ax − b ⪯ s
                  Ax − b ⪰ −s

• l∞ norm: ‖x‖∞ = max_i |x_i|

  Minimizing ‖Ax − b‖∞ is equivalent to this LP in x ∈ R^n, t ∈ R:

      minimize    t
      subject to  Ax − b ⪯ t1
                  Ax − b ⪰ −t1
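As a sanity check, the l∞ formulation can be handed to an LP solver; a sketch assuming SciPy is available, with an invented 3×2 instance:

```python
import numpy as np
from scipy.optimize import linprog

# min_x ||Ax - b||_inf  as an LP in (x, t):
#   min t  s.t.  Ax - b <= t*1,  -(Ax - b) <= t*1
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])
m, n = A.shape

c = np.zeros(n + 1); c[-1] = 1.0            # objective: t
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),    #  Ax - t1 <=  b
                  np.hstack([-A, -ones])])  # -Ax - t1 <= -b
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)]   # t >= 0 since it is a norm
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

x, t = res.x[:n], res.x[n]
print(t, np.max(np.abs(A @ x - b)))  # t equals the achieved l-inf residual
```

For this instance the three residuals balance at 2/3: x = (1/3, 4/3) makes |x1 − 1| = |x2 − 2| = |x1 + x2 − 1| = 2/3.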
Dual Linear Programming
1. Primal problem in standard form:

    minimize    c^T x
    subject to  Ax = b
                x ⪰ 0

2. Write down the Lagrangian using Lagrange multipliers λ, ν:

    L(x, λ, ν) = c^T x − Σ_{i=1}^n λ_i x_i + ν^T (Ax − b)
               = −b^T ν + (c + A^T ν − λ)^T x

3. Find the Lagrange dual function:

    g(λ, ν) = inf_x L(x, λ, ν) = −b^T ν + inf_x (c + A^T ν − λ)^T x

Since a linear function is bounded below only if it is identically zero, we have

    g(λ, ν) = { −b^T ν   if A^T ν − λ + c = 0
              { −∞       otherwise
Dual Linear Programming
4. Write down the Lagrange dual problem:

    maximize    g(λ, ν) = { −b^T ν   if A^T ν − λ + c = 0
                          { −∞       otherwise
    subject to  λ ⪰ 0

5. Make the implicit equality constraint explicit:

    maximize    −b^T ν
    subject to  A^T ν − λ + c = 0
                λ ⪰ 0

6. Simplify the Lagrange dual problem by eliminating the slack variable λ:

    maximize    −b^T ν
    subject to  A^T ν + c ⪰ 0

which is an inequality constrained LP
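Strong duality for LP (equal primal and dual optimal values when both are feasible) can be observed numerically; a sketch assuming SciPy's `linprog` is available, on an invented one-constraint instance:

```python
import numpy as np
from scipy.optimize import linprog

# Primal standard-form LP:  min c^T x  s.t.  Ax = b, x >= 0
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2)

# Dual:  max -b^T nu  s.t.  A^T nu + c >= 0
# As a minimization for linprog:  min b^T nu  s.t.  -A^T nu <= c
dual = linprog(b, A_ub=-A.T, b_ub=c, bounds=[(None, None)])
dual_opt = -b @ dual.x

print(primal.fun, dual_opt)  # strong duality: both equal 1.0
```

Here the primal optimum is x = (1, 0) with value 1, and the dual optimum is ν = −1 with value −bᵀν = 1.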
Basic Properties
Definition: x in a polyhedron P is an extreme point if there do not exist two other points y, z ∈ P such that x = θy + (1 − θ)z for some θ ∈ (0, 1)

Theorem: Assume that an LP in standard form is feasible and the optimal objective value is finite. Then there exists an optimal solution that is an extreme point

[Figure: polyhedron P, objective direction −c, and an optimal extreme point x*]
Algorithms
• Simplex Method
• Interior-point Method
• Ellipsoid Method
• Cutting-plane Method
The simplex method is very efficient in practice but specialized for LP: it moves from one vertex to an adjacent one without enumerating all the vertices

Interior-point algorithms have been fierce competitors of the simplex method since 1984
Convex QCQP
• (Convex) QP (with linear constraints) in x:

      minimize    (1/2) x^T P x + q^T x + r
      subject to  Gx ⪯ h
                  Ax = b

  where P ∈ S^n_+, G ∈ R^{m×n}, A ∈ R^{p×n}

• (Convex) QCQP in x:

      minimize    (1/2) x^T P_0 x + q_0^T x + r_0
      subject to  (1/2) x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, 2, . . . , m
                  Ax = b

  where P_i ∈ S^n_+ for i = 0, . . . , m
[Figure: feasible region P, optimal point x*, and the direction −∇f0(x*)]
Least-squares
• Minimize ‖Ax − b‖₂² = x^T A^T A x − 2 b^T A x + b^T b over x.
  An unconstrained QP; arises in regression analysis and least-squares approximation

  Analytic solution: x* = A†b where, for A ∈ R^{m×n},
  A† = (A^T A)^{−1} A^T if rank(A) = n, and A† = A^T (A A^T)^{−1} if rank(A) = m.
  If A is not full rank, A† is computed from the singular value decomposition.

• Constrained least-squares (no general analytic solution). For example:

      minimize    ‖Ax − b‖₂²
      subject to  l_i ≤ x_i ≤ u_i,  i = 1, . . . , n
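The analytic formulas can be compared against NumPy's SVD-based routines; a minimal sketch on random data (the full-column-rank case):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # tall matrix, full column rank
b = rng.standard_normal(6)

# Full-column-rank case: A† = (A^T A)^{-1} A^T, solved via the normal equations
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
# General case via the SVD (what np.linalg.pinv and np.linalg.lstsq compute)
x_pinv = np.linalg.pinv(A) @ b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.allclose(x_normal, x_pinv), np.allclose(x_pinv, x_lstsq))
```

All three agree here; in practice the SVD route is preferred numerically because forming AᵀA squares the condition number.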
LP with Random Cost
    minimize    c^T x
    subject to  Gx ⪯ h
                Ax = b

The cost vector c ∈ R^n is random, with mean c̄ and covariance Ω

Expected cost: c̄^T x. Cost variance: x^T Ω x

Minimize a weighted sum of expected cost and cost variance (with weight γ > 0):

    minimize    c̄^T x + γ x^T Ω x
    subject to  Gx ⪯ h
                Ax = b

which is a convex QP since Ω ⪰ 0
SOCP
Second Order Cone Programming:

    minimize    f^T x
    subject to  ‖A_i x + b_i‖₂ ≤ c_i^T x + d_i,  i = 1, . . . , m
                Fx = g

Variables: x ∈ R^n, with A_i ∈ R^{n_i×n} and F ∈ R^{p×n}

If c_i = 0 for all i, SOCP is equivalent to QCQP (square both sides of each constraint). If A_i = 0 for all i, SOCP reduces to an LP.
Robust LP
Consider an inequality constrained LP:

    minimize    c^T x
    subject to  a_i^T x ≤ b_i,  i = 1, . . . , m

The parameters a_i are not known accurately; they are only known to lie in given ellipsoids with centers ā_i and shapes P_i ∈ R^{n×n}:

    a_i ∈ E_i = {ā_i + P_i u | ‖u‖₂ ≤ 1}

Since sup{a_i^T x | a_i ∈ E_i} = ā_i^T x + ‖P_i^T x‖₂,

the robust LP (satisfy the constraints for all possible a_i) is formulated as the SOCP:

    minimize    c^T x
    subject to  ā_i^T x + ‖P_i^T x‖₂ ≤ b_i,  i = 1, . . . , m
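The supremum identity behind this reformulation is easy to verify numerically; a NumPy sketch on random data (the maximizer u* = P^T x / ‖P^T x‖₂ follows from the Cauchy-Schwarz bound):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a_bar = rng.standard_normal(n)
P = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Analytic supremum of a^T x over the ellipsoid {a_bar + P u : ||u||_2 <= 1}
sup_analytic = a_bar @ x + np.linalg.norm(P.T @ x)

# The sup is attained at u* = P^T x / ||P^T x||_2
u_star = P.T @ x / np.linalg.norm(P.T @ x)
attained = (a_bar + P @ u_star) @ x

# Random u on the unit sphere never exceed the analytic value
samples = rng.standard_normal((1000, n))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
sampled_max = np.max((a_bar + samples @ P.T) @ x)

print(np.isclose(attained, sup_analytic), sampled_max <= sup_analytic + 1e-9)
```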
Dual QCQP
Primal (convex) QCQP (inequality constraints only, to match the Lagrangian below):

    minimize    (1/2) x^T P_0 x + q_0^T x + r_0
    subject to  (1/2) x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, 2, . . . , m

Lagrangian: L(x, λ) = (1/2) x^T P(λ) x + q(λ)^T x + r(λ), where

    P(λ) = P_0 + Σ_{i=1}^m λ_i P_i,  q(λ) = q_0 + Σ_{i=1}^m λ_i q_i,  r(λ) = r_0 + Σ_{i=1}^m λ_i r_i

Since λ ⪰ 0, we have P(λ) ≻ 0 if P_0 ≻ 0, and

    g(λ) = inf_x L(x, λ) = −(1/2) q(λ)^T P(λ)^{−1} q(λ) + r(λ)

Lagrange dual problem:

    maximize    −(1/2) q(λ)^T P(λ)^{−1} q(λ) + r(λ)
    subject to  λ ⪰ 0
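Weak duality, g(λ) ≤ f0(x) for any feasible x and any λ ⪰ 0, can be spot-checked with the formula above; a NumPy sketch on a random instance (the construction of P_i, q_i, r_i is invented for illustration, with r_i < 0 so that x = 0 is strictly feasible):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2

def rand_pd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)   # positive definite

P = [rand_pd(n) for _ in range(m + 1)]       # P_0, ..., P_m
q = [rng.standard_normal(n) for _ in range(m + 1)]
r = [0.0] + [-1.0] * m                       # f_i(0) = r_i < 0 for i >= 1

def f(i, x):
    return 0.5 * x @ P[i] @ x + q[i] @ x + r[i]

def g(lam):
    # Dual function: g(lam) = -(1/2) q(lam)^T P(lam)^{-1} q(lam) + r(lam)
    Pl = P[0] + sum(l * Pi for l, Pi in zip(lam, P[1:]))
    ql = q[0] + sum(l * qi for l, qi in zip(lam, q[1:]))
    rl = r[0] + sum(l * ri for l, ri in zip(lam, r[1:]))
    return -0.5 * ql @ np.linalg.solve(Pl, ql) + rl

x_feas = np.zeros(n)               # feasible: f_i(0) = -1 <= 0 for i >= 1
lam = rng.uniform(0, 1, m)         # any lam >= 0 gives a valid lower bound
print(g(lam) <= f(0, x_feas))      # weak duality: True
```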
KKT Conditions for QP
Primal (convex) QP with linear equality constraints:

    minimize    (1/2) x^T P x + q^T x + r
    subject to  Ax = b

KKT conditions:

    Ax* = b,   Px* + q + A^T ν* = 0

which can be written in matrix form:

    [ P   A^T ] [ x* ]   [ −q ]
    [ A    0  ] [ ν* ] = [  b ]

Solving this system of linear equations is equivalent to solving the equality constrained convex quadratic minimization problem
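The KKT system above is just one linear solve; a NumPy sketch on a random positive definite P (the instance is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 4, 2
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)              # positive definite
q = rng.standard_normal(n)
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)

# KKT system:  [P A^T; A 0] [x*; nu*] = [-q; b]
KKT = np.block([[P, A.T],
                [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x_star, nu_star = sol[:n], sol[n:]

print(np.allclose(A @ x_star, b),                      # primal feasibility
      np.allclose(P @ x_star + q + A.T @ nu_star, 0))  # stationarity
```

Both KKT conditions hold at the solution of the linear system, confirming the equivalence stated above.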
Summary
• LP covers a wide range of interesting problems and applications

• The dual of an LP is again an LP

• First type of nonlinearity: quadratic

• Least-squares

• Nonlinear problems that are, or can be converted into, convex optimization problems: QCQP, SOCP. These cover LP as a special case

Reading assignment: Sections 4.3-4.4 and 6.1-6.2 of the textbook.