Abstract Motivation SQOPT QPBLU Regularization QPBLUR Results Conclusions
An active-set convex QP solver based on regularized KKT systems

Christopher Maes, iCME, Stanford University
Michael Saunders, SOL, Stanford University

BIRS Workshop 09w5101, Advances and Perspectives on Numerical Methods for Saddle Point Problems
Banff, Alberta, Canada, April 12–17, 2009
Banff 2009 1/26
Outline
1 Abstract
2 Motivation
3 SQOPT
4 QPBLU
5 Regularization
6 QPBLUR
7 Results
8 Conclusions
An active-set convex QP solver based on regularized KKT systems
Implementations of the simplex method depend on "basis repair" to steer around near-singular basis matrices, and KKT-based QP solvers must deal with near-singular KKT systems. However, few sparse-matrix packages have the required rank-revealing features (we know of LUSOL, MA48, MA57, and HSL MA77).

For convex QP, we explore the idea of avoiding singular KKT systems by applying primal and dual regularization to the QP problem. A simplified single-phase active-set algorithm can then be developed. Warm starts are straightforward from any given active set, and the range of applicable KKT solvers expands.

QPBLUR is a prototype QP solver that makes use of the block-LU KKT updates in QPBLU (Hanh Huynh's PhD dissertation, 2008) but employs regularization and the simplified active-set algorithm. The aim is to provide a new QP subproblem solver for SNOPT for problems with many degrees of freedom. Numerical results confirm the robustness of the single-phase regularized QP approach.
Supported by the Office of Naval Research and AHPCRC
Motivation (SNOPT)
Motivation: SNOPT
SNOPT: an SQP method for constrained NLP (Gill, Murray, Saunders 2005)
SQOPT: an active-set method. Solves a sequence of convex QP problems

  min_x  c^T x + (1/2) x^T H x
  s.t.   Ax = b,  l ≤ x ≤ u,

where c, H, A, b change (less and less)

Initially H diagonal
Later H ← G^T H G,  G = I + d v^T
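The update H ← G^T H G with G = I + d v^T can be sketched in a few lines (illustrative NumPy only; the sizes and random data are hypothetical, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
H = M.T @ M                          # current SPD Hessian approximation
d = rng.standard_normal(n)
v = rng.standard_normal(n)

G = np.eye(n) + np.outer(d, v)       # rank-one modification of the identity
Hnew = G.T @ H @ G                   # updated approximation; stays symmetric
print(np.allclose(Hnew, Hnew.T))
```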
Convex QP Solvers
  Interior     Active-set
  --------     ----------
  LOQO
  HOPDM        SQOPT (reduced-Hessian)
  QPB          QPA
  CPLEX        QPBLU
  IPOPT        QPBLUR

All based on KKT systems (except SQOPT)
SQOPT
SQOPT
min c^T x + (1/2) x^T H x   s.t.  Ax = b,  l ≤ x ≤ u

Reduced-gradient method (active-set method)

  AP = [ B  S  N ]

B, S: free variables;  N: fixed variables;  the S columns are the degrees of freedom
SQOPT
Any active-set method solves for a search direction in the Free variables:

  [ H_F  A_F^T ] [ Δx_F ]  ≈  [ r1 ]
  [ A_F   0    ] [ Δy   ]     [ r2 ]

Reduced-gradient method uses a specific ordering (Gill et al. 1990):

  A_F = ( B  S )  →  [ H_BB  B^T  H_BS ]
                     [ B     0    S    ]
                     [ H_SB  S^T  H_SS ]

Reduced Hessian is the Schur complement of the leading (red) block:

  Z^T H Z = H_SS − (...)(...)^{-1}(...)^T

Likely to be dense ⇒ Need to work with original KKT system
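The density claim is easy to see in a small dense sketch (hypothetical sizes, NumPy only): Z = [−B^{-1}S; I] is a null-space basis for A_F = (B S), and Z^T H Z is small but generally dense even when H, B, S are sparse.

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 5, 2                                   # basic and superbasic counts
n = m + s
M = rng.standard_normal((n, n))
H = M.T @ M                                   # convex Hessian
B = rng.standard_normal((m, m))               # nonsingular basis (generic)
S = rng.standard_normal((m, s))
AF = np.hstack([B, S])

# Null-space basis: AF @ Z = B(-B^{-1}S) + S = 0
Z = np.vstack([-np.linalg.solve(B, S), np.eye(s)])
print(np.allclose(AF @ Z, 0))

ZHZ = Z.T @ H @ Z                             # s-by-s reduced Hessian
print(ZHZ.shape)                              # small, but generally dense
```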
SQOPT limitation
min c^T x + (1/2) x^T H x   s.t.  Ax = b,  l ≤ x ≤ u

Reduced-gradient method

  AP = [ B  S  N ]   (B, S free variables;  N fixed variables)

            B       S      N
  OK if     10000   2000   100000
  What if   1M      1M     10M
QPBLU
QPBLU: F90 convex QP solver based on block-LU updates of K (Hanh Huynh 2008)

First,

  K0 = [ H_F  A_F^T ]  =  L0 D0 L0^T  or  L0 U0
       [ A_F   0    ]

Later,

  K ≈ [ K0   V ]  =  [ L0       ] [ U0  Y ]
      [ V^T  E ]     [ Z^T   I  ] [     C ]

Active-set method of (Gill 2007) keeps K nonsingular
Y, Z sparse, C small
Quasi-Newton updates to H handled the same way
L0, U0 from LUSOL, MA57, PARDISO, SuperLU, UMFPACK
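The block-LU update above can be sketched with dense linear algebra (illustrative only: `solve_augmented` is a hypothetical helper, and SciPy's `lu_factor` stands in for the sparse L0, U0 factors). Solving with the bordered matrix reuses the K0 factorization plus a small Schur complement C = E − V^T K0^{-1} V, mirroring the Y, C blocks.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_augmented(lu0, V, E, r1, r2):
    """Solve [K0 V; V^T E][u; w] = [r1; r2] reusing a factorization of K0."""
    Y = lu_solve(lu0, V)                  # K0 Y = V
    C = E - V.T @ Y                       # small Schur complement
    t = lu_solve(lu0, r1)
    w = np.linalg.solve(C, r2 - V.T @ t)  # C w = r2 - V^T K0^{-1} r1
    u = t - Y @ w                         # back-substitute
    return u, w

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n))
K0 = A + A.T + 10 * np.eye(n)             # symmetric nonsingular stand-in
V = rng.standard_normal((n, k))
E = rng.standard_normal((k, k)); E = E + E.T
lu0 = lu_factor(K0)                       # factor K0 once, reuse for updates

r1, r2 = rng.standard_normal(n), rng.standard_normal(k)
u, w = solve_augmented(lu0, V, E, r1, r2)
K = np.block([[K0, V], [V.T, E]])
print(np.allclose(K @ np.concatenate([u, w]), np.concatenate([r1, r2])))
```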
Singular KKTs
MA27, MA57, ..., HSL MA87 are rank-revealing if u ≥ 0.25 (say)

For semidefinite QP,

  K = [ D1   0    A1^T ]
      [ 0    0    A2^T ]     with  D1 ≻ 0
      [ A1   A2   0    ]

Theorem. K is nonsingular iff
  ( A1  A2 ) has full row rank
  A2 has full column rank

KKT repair not yet implemented
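A quick numeric sanity check of the theorem (illustrative sizes and seeded random data, not from the talk): a generic A1, A2 satisfies both rank conditions, and duplicating a column of A2 destroys full column rank and makes K singular.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 4, 2, 5

def kkt(D1, A1, A2):
    """Assemble K = [D1 0 A1^T; 0 0 A2^T; A1 A2 0]."""
    n1, n2, m = A1.shape[1], A2.shape[1], A1.shape[0]
    return np.block([[D1, np.zeros((n1, n2)), A1.T],
                     [np.zeros((n2, n1)), np.zeros((n2, n2)), A2.T],
                     [A1, A2, np.zeros((m, m))]])

D1 = np.diag(rng.uniform(1.0, 2.0, n1))     # D1 positive definite
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))           # generic: both conditions hold
K = kkt(D1, A1, A2)
print(np.linalg.matrix_rank(K) == n1 + n2 + m)

# Violate "A2 has full column rank": duplicate a column -> K singular
A2bad = np.column_stack([A2[:, 0], A2[:, 0]])
Kbad = kkt(D1, A1, A2bad)
print(np.linalg.matrix_rank(Kbad) < n1 + n2 + m)
```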
Regularization
Regularization
PDQ1: interior method for LP (Saunders 1996), OSL (Saunders and Tomlin 1996)

  min_{x,y}  c^T x + (1/2) γ ‖x‖^2 + (1/2) δ ‖y‖^2
  s.t.       Ax + δ y = b,   l ≤ x ≤ u

Indefinite LDL^T on KKT is stable if γ, δ sufficiently large

HOPDM: interior method for QP (Altman and Gondzio 1999)

Smaller perturbation via dynamic proximal-point terms:

  (1/2)(x − x_k)^T R_p (x − x_k),   (1/2)(y − y_k)^T R_d (y − y_k)
QPBLUR
QPBLUR
  min_{x,y}  c^T x + (1/2) x^T H x + (1/2) δ ‖x‖_2^2 + (1/2) µ ‖y‖_2^2
  s.t.       Ax + µ y = b,   l ≤ x ≤ u

  [ H_F + δI   A_F^T ] [ Δx_F ]  ≈  [ r1 ]
  [ A_F        −µI   ] [ Δy   ]     [ r2 ]

Nonsingular for any active set (any A_F)
Always feasible (no Phase 1)
Can use LUSOL, MA57, UMFPACK, ... without change
Can use Hanh's block-LU updates without change
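The nonsingularity claim is easy to check numerically: the regularized KKT matrix is symmetric quasi-definite ((1,1) block positive definite, (2,2) block negative definite), hence nonsingular even for a badly rank-deficient A_F and H = 0. A minimal sketch with hypothetical sizes:

```python
import numpy as np

n, m = 6, 4
rng = np.random.default_rng(0)
H = np.zeros((n, n))                          # worst case: H merely semidefinite
row = rng.standard_normal((1, n))
AF = np.vstack([row] * m)                     # rank 1: badly rank-deficient
delta, mu = 1e-6, 1e-6

K = np.block([[H + delta * np.eye(n), AF.T],
              [AF, -mu * np.eye(m)]])
# Quasi-definite => nonsingular for any AF
print(np.linalg.matrix_rank(K) == n + m)
```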
Strategy
  min_{x,y}  c^T x + (1/2) x^T H x + (1/2) δ ‖x‖_2^2 + (1/2) µ ‖y‖_2^2
  s.t.       Ax + µ y = b,   l ≤ x ≤ u

Matlab implementation:
  Scale problem
  Get square A_F from P A^T Q = LU   ([L,U,P] = lu(...))
  Solve with δ, µ = 10^-6, 10^-8, 10^-10, 10^-12,  opttol = sqrt(δ)
  Unscale
  Solve with δ, µ = 10^-8, 10^-10, 10^-12,  opttol = sqrt(δ)
  Exit if small relative change in objective
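The δ, µ continuation can be sketched on an equality-constrained model problem (the bounds l ≤ x ≤ u and the active-set inner loop are omitted; `reg_qp_solve` is a hypothetical stand-in for one regularized solve). As µ shrinks, the residual Ax − b = −µy shrinks with it.

```python
import numpy as np

def reg_qp_solve(H, c, A, b, delta, mu):
    """One regularized solve via the (nonsingular) regularized KKT system."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H + delta * np.eye(n), A.T],
                  [A, -mu * np.eye(m)]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    return sol[:n]

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n)); H = M.T @ M   # convex objective
c = rng.standard_normal(n)
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)

# Decreasing delta = mu, as in the strategy above; in the real active-set
# method each solve would warm-start the next.
for delta in (1e-6, 1e-8, 1e-10, 1e-12):
    x = reg_qp_solve(H, c, A, b, delta, delta)
print(np.linalg.norm(A @ x - b) < 1e-8)        # Ax - b = -mu*y is tiny
```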
Numerical Results
Accuracy of solutions
[Scatter plot: Optimality vs Primal infeasibility, axes from 10^-16 to 10^0, for 90 convex QPs from the Maros-Meszaros test set]

Residuals for 90 Meszaros QP test problems
KKT factorizations
[Scatter plot: total number of KKT factorizations (0 to 180) vs number of variables (0 to 5000) for 90 convex QPs from the Maros-Meszaros test set]

No. of factorizations on 90 Meszaros QP test problems
Infeasible problems
[Plot: iterations (0 to 4×10^4) per problem; legend: µ ↘ 10^-12 vs µ ≡ 1]

No. of iterations on 28 infeasible LP test problems (Netlib lpi xxx)

  (min c^T x + (1/2) x^T H x + (1/2) δ x^T x + (1/2) µ y^T y,   Ax + µ y = b, ...)
Conclusions
QPBLUR Pros and Cons
  min_{x,y}  c^T x + (1/2) x^T H x + (1/2) δ x^T x + (1/2) µ y^T y
  s.t.       Ax + µ y = b,   l ≤ x ≤ u

Advantages
  Can start from any active set (hence good for warm starts in SNOPT)
  Can use any black-box LDL^T or LU solver (preferably separate L and U solves and no refinement)
  Black-box solver should never report singularity (hence no KKT repair)
  Simple step-length procedure (l ≤ x ≤ u always; no degeneracy troubles)

Disadvantages
  Large regularization can increase number of iterations
  Tiny regularization risks ill-conditioned KKT (but so far so good)
References
A. Altman and J. Gondzio (1999). Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization, Optimization Methods and Software 11, 11–12.

P. E. Gill (2007). Notes on an inertia-controlling active-set QP solver.

P. E. Gill, W. Murray, and M. A. Saunders (2005). SNOPT: An SQP algorithm for large-scale constrained optimization, SIAM Review 47(1).

P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright (1990). A Schur-complement method for sparse quadratic programming, in M. G. Cox and S. Hammarling (eds.), Reliable Numerical Computation, Oxford University Press, 113–138.

Hanh Huynh (2008). A Large-scale Quadratic Programming Solver Based on Block-LU Updates of the KKT System, PhD thesis, Stanford U.

M. A. Saunders (1996). Cholesky-based methods for sparse least squares: The benefits of regularization, in L. Adams and J. L. Nazareth (eds.), Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, Philadelphia, 92–100.

M. A. Saunders and J. A. Tomlin (1996). Solving regularized linear programs using barrier methods and KKT systems, Report SOL 96-4, Stanford U.
Related reference
M. P. Friedlander and P. Tseng (2007). Exact regularization of convex programs, SIAM J. Optim. 18(4), 1326–1350.

Further results (and F90 implementation): SIAM Applied Linear Algebra meeting, Monterey, CA, Oct 2009

Warmest thanks to organizers Howard, Chen, and Dominik