
Applications of SDP least-squares in finance and combinatorics

Jerome MALICK

CNRS, Lab. J. Kuntzmann, Grenoble

CORE math. prog. seminar – 11 March 2008

Outline

1 Semidefinite programming and least-squares

2 Application 1: constructing covariance matrices in finance

3 Application 2: relaxing binary quadratic problems

4 Application 3: regularizing methods for solving SDP


Semidefinite programming

The cone of positive semidefinite matrices:

$$S_n^+ = \{A \in S_n : \forall x \in \mathbb{R}^n,\ x^\top A x \ge 0\} = \{A \in S_n : \lambda_{\min}(A) \ge 0\}$$

Standard semidefinite programming:

$$(\mathrm{SDP})\qquad \min\ \langle C, X\rangle \quad\text{s.t.}\quad \mathcal{A}X = b,\ X \succeq 0$$

R. Saigal, L. Vandenberghe, and H. Wolkowicz. Handbook of Semidefinite Programming. Kluwer, 2000.

Semidefinite least-squares

Given $C \in S_n$, find the closest feasible matrix:

$$(\mathrm{SDLS})\qquad \min\ \|X - C\|^2 \quad\text{s.t.}\quad \mathcal{A}X = b,\ X \succeq 0$$

(figure: the projection of $C$ onto the intersection of the affine subspace $\mathcal{A}X = b$ with the cone $S_n^+$)

SDLS problems

Example: compute the nearest correlation matrix

$$\min\ \|X - C\|^2 \quad\text{s.t.}\quad \operatorname{diag}(X) = e,\ X \succeq 0$$

In general, the form tackled is

$$\min\ \langle C', X\rangle + \|W(X - C)W\|^2 \quad\text{s.t.}\quad \mathcal{A}X = b,\ \mathcal{A}'X \le b',\ X \succeq 0$$

Question: how to solve semidefinite least-squares?

Basic fact: Project onto the cone

With no affine constraint, the problem is to project onto the cone:

$$\min\ \|X - C\|^2 \quad\text{s.t.}\quad X \succeq 0$$

It has a well-known, easy-to-compute solution: if $C = P \operatorname{Diag}(\lambda_1, \ldots, \lambda_n)\, P^\top$ is a spectral decomposition, then

$$P_{S_n^+}(C) = P \operatorname{Diag}\big(\max\{0, \lambda_1\}, \ldots, \max\{0, \lambda_n\}\big)\, P^\top$$
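
The projection formula above fits in a few lines of numpy (an illustration added here, not code from the talk): one symmetric eigendecomposition, then clip the negative eigenvalues to zero.

```python
import numpy as np

def proj_psd(C):
    """Project a symmetric matrix C onto the PSD cone S_n^+ :
    keep the spectral decomposition, clip negative eigenvalues to 0."""
    lam, P = np.linalg.eigh(C)
    return (P * np.maximum(lam, 0.0)) @ P.T   # P diag(max(0, lam)) P^T

C = np.array([[0.0, 2.0],
              [2.0, 0.0]])   # eigenvalues +2 and -2
X = proj_psd(C)              # keeps only the eigenvalue +2
```

For this $C$, the projection keeps only the eigenpair with $\lambda = 2$, giving the rank-one matrix of all ones.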

1st idea: alternating projections

Projection onto the intersection of two (convex) sets: the affine subspace $\mathcal{A}X = b$ and the cone $S_n^+$

→ alternating projections + Dykstra correction

N. Higham. Computing the nearest correlation matrix - a problem from finance. IMA Journal of Numerical Analysis, 2002.
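
For the nearest-correlation instance, Higham's scheme alternates the two projections with Dykstra's correction on the cone projection. A minimal sketch (assuming the unit-diagonal constraint and a small indefinite test matrix; not Higham's production code):

```python
import numpy as np

def proj_psd(M):
    lam, P = np.linalg.eigh(M)
    return (P * np.maximum(lam, 0.0)) @ P.T

def nearest_correlation(C, n_iter=200):
    """Alternating projections with Dykstra's correction:
    alternate between S_n^+ and the affine set {diag(X) = e}."""
    Y = C.copy()
    dS = np.zeros_like(C)          # Dykstra correction for the cone step
    for _ in range(n_iter):
        R = Y - dS                 # remove the previous correction
        X = proj_psd(R)            # project onto S_n^+
        dS = X - R                 # update the correction
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)   # project onto {diag(X) = e}
    return Y

C = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.3],
              [0.7, 0.3, 1.0]])    # indefinite: not a correlation matrix
X = nearest_correlation(C)
```

The input has a negative eigenvalue; the output has unit diagonal and is (numerically) positive semidefinite.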

Next ideas

2nd idea: reformulate SDLS as a linear conic program

$$(\mathrm{SDLS}) \iff \min\ t \quad\text{s.t.}\quad \|X - C\| \le t,\ \mathcal{A}X = b,\ X \succeq 0$$

with usual conic solvers (SeDuMi, SDPT3, ...) → no good results

3rd idea: interior-point method

K.C. Toh, R.H. Tutuncu, and M.J. Todd. Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems. Pacific Journal of Optimization, 2006.

4th idea: by duality (Lagrangian duality + dual resolution)

J. Malick. A dual approach to semidefinite least-squares problems. SIAM Journal on Matrix Analysis and Applications, 2004.

Lagrangian duality

Apply the standard machinery:

$$L(X; \lambda) = \|X - C\|^2 - \lambda^\top(\mathcal{A}X - b)$$

$$\theta(\lambda) = \min\ \{\, L(X; \lambda) : X \succeq 0 \,\}$$

$$(\mathrm{dual})\qquad \max\ \{\, \theta(\lambda) : \lambda \in \mathbb{R}^m \,\}$$

The dual problem is convex and differentiable!

Solving the dual problem

Apply standard algorithms to

$$(\mathrm{dual})\qquad \max\ \{\, \theta(\lambda) : \lambda \in \mathbb{R}^m \,\}$$

via variable metric descent:

$$\lambda_{k+1} = \lambda_k - t_k W_k \nabla\theta(\lambda_k)$$

1 Steepest descent = alternating projections: $W_k = \mathrm{Id}$ and $t_k = 1$

2 Quasi-Newton: $W_k$ = BFGS and $t_k$ well-chosen

3 Generalized Newton: $W_k = [H_k]^{-1}$ with $H_k \in \partial^2_C\, \theta(\lambda_k)$
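
For the nearest-correlation instance with the $\tfrac12\|X-C\|^2$ scaling (an assumption made here to keep formulas simple), the inner minimizer is $X(y) = P_{S_n^+}(C + \operatorname{Diag}(y))$ and the dual gradient is $\nabla\theta(y) = e - \operatorname{diag}(X(y))$, so the steepest-ascent step with $W_k = \mathrm{Id}$, $t_k = 1$ is a one-liner. A sketch of this simplest variant (not the quasi-Newton or Newton code of the talk):

```python
import numpy as np

def proj_psd(M):
    lam, P = np.linalg.eigh(M)
    return (P * np.maximum(lam, 0.0)) @ P.T

def dual_ascent_nearest_corr(C, n_iter=2000):
    """Maximize the dual of min (1/2)||X - C||^2, diag(X) = e, X >= 0:
    X(y) = P_+(C + Diag(y)),  grad theta(y) = e - diag(X(y))."""
    n = C.shape[0]
    y = np.zeros(n)
    for _ in range(n_iter):
        X = proj_psd(C + np.diag(y))
        y += np.ones(n) - np.diag(X)   # gradient step, W_k = Id, t_k = 1
    return proj_psd(C + np.diag(y))

C = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.3],
              [0.7, 0.3, 1.0]])
X = dual_ascent_nearest_corr(C)
```

At convergence the primal iterate $X(y)$ is PSD by construction and its diagonal approaches $e$, which is exactly the alternating-projections behavior announced on the slide.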

Numerical example

Compute the nearest correlation matrix (for random $C$ of various sizes):

$$\min\ \|X - C\|^2 \quad\text{s.t.}\quad \operatorname{diag}(X) = e,\ X \succeq 0$$

Stop if the relative error satisfies

$$\frac{\|\operatorname{diag}(X) - e\|}{\sqrt{n}} \le 10^{-7}$$

size   cpu time          nb iter.
100    0.2 s             14
500    16.3 s            17
1000   2 min 05 s        18
2000   17 min 41 s       19
3000   1 h 08 min 14 s   19

Conclusion: SDP vs. SDLS

Semidefinite programming and its dual:

$$(\mathrm{SDP})\qquad \min\ \langle C, X\rangle \ \text{ s.t. }\ \mathcal{A}X = b,\ X \succeq 0 \qquad\longleftrightarrow\qquad \max\ b^\top y \ \text{ s.t. }\ C - \mathcal{A}^* y = Z,\ Z \succeq 0$$

Semidefinite least-squares and its dual:

$$(\mathrm{SDLS})\qquad \min\ \|X - C\|^2 \ \text{ s.t. }\ \mathcal{A}X = b,\ X \succeq 0 \qquad\longleftrightarrow\qquad \max\ \{\, \theta(\lambda) : \lambda \in \mathbb{R}^m \,\}$$

Bottom line: → SDLS is easier than SDP! (geometry + differentiable dual problem)

SDLS modelling power? Much less than SDP, but → in the next applications, SDLS problems naturally appear...

Outline

1 Semidefinite programming and least-squares

2 Application 1: constructing covariance matrices in finance

3 Application 2: relaxing binary quadratic problems

4 Application 3: regularizing methods for solving SDP

Estimating covariance matrices

Random variable $X \in \mathbb{R}^n$ with unknown moments (e.g.: returns of $n$ assets)

Observations of $X$: $x_t \in \mathbb{R}^n$ for $t = 1, \ldots, T$ (e.g.: historical charts of the assets)

Empirical estimations:

$$\mu = \frac{1}{T}\sum_{t=1}^T x_t \qquad\qquad \hat\Sigma = \frac{1}{T}\sum_{t=1}^T (x_t - \mu)(x_t - \mu)^\top \succeq 0$$

Beyond statistics:

→ in practice, $\hat\Sigma$ has weaknesses (e.g.: incomplete data, choice of $T$, bad conditioning, ...)

→ how to add exogenous (non-historical) information? (e.g.: set values to entries, impose structure, ...)

Covariance matrices in finance

Goal: construct "good" covariance matrices with special structure, from the empirical estimation $\hat\Sigma$

Many uses in quantitative finance, e.g.:

Portfolio selection

Correlation stress testing

Convexification of the BGM model

Aggregation to a global risk model from local models and cross-market estimates

RaisePartner Corp.

http://www.raisepartner.com

Example: portfolio selection

Find the portfolio $x = (x_1, \ldots, x_n)$ with minimum risk, as the solution of

$$\min\ x^\top \Sigma\, x \quad\text{s.t.}\quad x^\top \mu \ge r_0,\ x \in \Delta$$

where $\Delta$ is the set of admissible portfolios.

Using $\hat\Sigma$ as $\Sigma$ underestimates small risks → decision not reliable, portfolio unstable

Pipeline: data → (statistics) → $\hat\Sigma$ → (calibration: SDLS) → $\Sigma$ → (optimization) → portfolio

Preliminary calibration: in the selection problem, use $\Sigma$ the solution of

$$\min\ \|X - \hat\Sigma\|^2 \quad\text{s.t.}\quad X - \varepsilon I_n \succeq 0,\ \operatorname{trace}(X) = \operatorname{trace}(\hat\Sigma),\ \langle A_i, X\rangle \in [a_i, b_i]$$
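
Without the extra linear constraints $\langle A_i, X\rangle \in [a_i, b_i]$ (an assumption made here to keep the subproblem simple), the calibration acts on the spectrum only: in the eigenbasis of $\hat\Sigma$ it reduces to $\min \sum_i (x_i - \lambda_i)^2$ s.t. $x_i \ge \varepsilon$, $\sum_i x_i = \sum_i \lambda_i$, solved by bisection on a scalar shift. A sketch of that reduced subproblem (not RaisePartner's code):

```python
import numpy as np

def calibrate_spectrum(lam, eps):
    """min sum (x_i - lam_i)^2  s.t.  x_i >= eps,  sum x_i = sum lam_i.
    KKT: x_i = max(eps, lam_i + c), with the shift c found by bisection."""
    total = lam.sum()
    s = lambda c: np.maximum(eps, lam + c).sum()   # nondecreasing in c
    lo = -(np.abs(lam).max() + abs(eps) + 1.0)
    hi = np.abs(lam).max() + abs(eps) + 1.0
    for _ in range(200):
        c = 0.5 * (lo + hi)
        if s(c) < total:
            lo = c
        else:
            hi = c
    return np.maximum(eps, lam + 0.5 * (lo + hi))

def calibrate(Sigma_hat, eps):
    """Nearest matrix to Sigma_hat with X - eps*I >= 0 and the same trace."""
    lam, P = np.linalg.eigh(Sigma_hat)
    x = calibrate_spectrum(lam, eps)
    return (P * x) @ P.T

x = calibrate_spectrum(np.array([-0.5, 1.0, 2.0]), 0.1)
```

For eigenvalues $(-0.5, 1, 2)$ and $\varepsilon = 0.1$, the shift is $c = -0.3$ and the calibrated spectrum is $(0.1, 0.7, 1.7)$, preserving the total trace $2.5$.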

Simple illustration (figure)

Outline

1 Semidefinite programming and least-squares

2 Application 1: constructing covariance matrices in finance

3 Application 2: relaxing binary quadratic problems

4 Application 3: regularizing methods for solving SDP

Bounds, relaxations

Bounding is a basic tool of combinatorial optimization. If we cannot solve

$$\max\ \{\, \text{(easy objective function)} : x \in \text{(difficult constraints)} \,\}$$

we are content with an upper bound, computed after relaxation.

Enlarging the feasible set →

$$\max\ \{\, \text{(easy function)} : x \in \text{(easy constraints)} \,\}$$

Example: max-cut

$G = (V, E)$ undirected graph, $Q = (-w_{ij})_{ij}$ weight matrix

Finding the cut of maximal weight:

$$\max\ \{\, x^\top Q\, x : x \in \{-1, 1\}^n \,\}$$

→ NP hard

(figure: a 5-node weighted graph $G$ and a cut)

→ Classical relaxations:

linear relaxation: $x_i \in \{-1, 1\} \rightsquigarrow x_i \in [-1, 1]$

SDP relaxation...

many others!

SDP relaxation

$$(\text{max-cut})\qquad \max\ \{\, x^\top Q\, x : x \in \{-1, 1\}^n \,\}$$

With the change of variable $X = xx^\top$:

$$(\text{max-cut})\qquad \max\ \langle Q, X\rangle \quad\text{s.t.}\quad X_{ii} = 1,\ X \succeq 0,\ \operatorname{rank} X = 1$$

Dropping the rank constraint:

$$(\text{SDP relax.})\qquad \max\ \langle Q, X\rangle \quad\text{s.t.}\quad X_{ii} = 1,\ X \succeq 0$$

L. Lovasz. On the Shannon capacity of a graph. IEEE Trans. on Information Theory, 1979.

M.X. Goemans and D.P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 1995.

Spherical constraint

For all $X \succeq 0$ satisfying $X_{ii} = 1$, we have

$$\|X\| \le n$$

and moreover

$$\|X\| = n \iff \operatorname{rank} X = 1$$

the "spherical constraint".

(figure: the sphere $\|X\| = n$ touching the set $\{X \succeq 0,\ \operatorname{diag} X = e\}$ exactly at the rank-one matrices)
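
The two facts are easy to check numerically: for PSD $X$, $\|X\|_F^2 = \sum_i \lambda_i^2 \le (\sum_i \lambda_i)^2 = (\operatorname{trace} X)^2 = n^2$, with equality exactly for rank one. A small demonstration (random instance assumed, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A cut vector x in {-1,1}^n gives X = x x^T: unit diagonal, PSD, rank one,
# and Frobenius norm exactly n.
x = rng.choice([-1.0, 1.0], size=n)
X_cut = np.outer(x, x)

# A generic feasible X (PSD, unit diagonal) built from a random Gram matrix
# has full rank, hence stays strictly inside the sphere ||X|| <= n.
A = rng.standard_normal((n, n))
M = A @ A.T
d = 1.0 / np.sqrt(np.diag(M))
X_feas = M * np.outer(d, d)

norm_cut = np.linalg.norm(X_cut)    # = n
norm_feas = np.linalg.norm(X_feas)  # < n
```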

Dualizing the spherical constraint

$$(\text{max-cut})\qquad \max\ \langle Q, X\rangle \quad\text{s.t.}\quad X_{ii} = 1 \text{ for all } i,\ X \succeq 0,\ \|X\|^2 - n^2 = 0$$

Dualize only the spherical constraint, with multiplier $\alpha$:

$$L(X; \alpha) = \langle Q, X\rangle - \frac{\alpha}{2}\left(\|X\|^2 - n^2\right) \qquad\text{(Lagrangian)}$$

$$\theta(\alpha) = \max\ \{\, L(X; \alpha) : X_{ii} = 1 \text{ for all } i,\ X \succeq 0 \,\} \qquad\text{(dual function)}$$

Completing the square, for $\alpha > 0$:

$$\theta(\alpha) = \mathrm{Cst}(\alpha) - \alpha\, \min\ \left\{\, \tfrac{1}{2}\|X - Q/\alpha\|^2 : X_{ii} = 1 \text{ for all } i,\ X \succeq 0 \,\right\}, \qquad \mathrm{Cst}(\alpha) = \frac{\alpha n^2}{2} + \frac{\|Q\|^2}{2\alpha}$$

→ an SDLS problem (if $\alpha > 0$)

Bounds and duality gap

What does it yield?

→ Weak duality: each $\theta(\alpha)$ gives an upper bound for (max-cut)

→ No gap (!): in theory, bounds as tight as we want:

$$\theta(\alpha) \to \mathrm{val}(\text{max-cut}) \quad\text{when}\quad \alpha \to -\infty$$

→ In practice:

if $\alpha > 0$, compute $\theta(\alpha)$ with SDLS solvers

if $\alpha = 0$, there holds

$$\theta(0) = \max\ \{\, \langle Q, X\rangle : X_{ii} = 1 \text{ for all } i,\ X \succeq 0 \,\}$$

so compute $\theta(0)$ with SDP solvers

if $\alpha < 0$ (large in magnitude), computing $\theta(\alpha)$ is NP hard!
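
On a toy instance one can brute-force the max-cut value and evaluate the SDLS bound by projection. A sketch (assuming the scaling $L(X;\alpha) = \langle Q,X\rangle - \tfrac{\alpha}{2}(\|X\|^2 - n^2)$, so that $\theta(\alpha) = \alpha n^2/2 + \|Q\|^2/(2\alpha) - \alpha\,\min \tfrac12\|X - Q/\alpha\|^2$; real instances need the dedicated solvers, not this Dykstra loop):

```python
import numpy as np
from itertools import product

def proj_psd(M):
    lam, P = np.linalg.eigh(M)
    return (P * np.maximum(lam, 0.0)) @ P.T

def project_elliptope(C, n_iter=500):
    """Dykstra projection onto {X >= 0, diag(X) = e}: the SDLS of theta(alpha)."""
    Y, dS = C.copy(), np.zeros_like(C)
    for _ in range(n_iter):
        R = Y - dS
        X = proj_psd(R)
        dS = X - R
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)
    return Y

def sdls_bound(Q, alpha):
    """theta(alpha) = alpha n^2/2 + ||Q||^2/(2 alpha) - alpha * (1/2) dist^2."""
    n = Q.shape[0]
    X = project_elliptope(Q / alpha)
    dist2 = np.linalg.norm(X - Q / alpha) ** 2
    return alpha * n**2 / 2 + np.linalg.norm(Q) ** 2 / (2 * alpha) - alpha * dist2 / 2

rng = np.random.default_rng(1)
n = 6
W = np.triu(rng.uniform(0.0, 1.0, (n, n)), 1)
Q = -(W + W.T)                      # Q = (-w_ij), zero diagonal

best = -np.inf                      # brute-force value of max x^T Q x
for bits in product([-1.0, 1.0], repeat=n):
    v = np.array(bits)
    best = max(best, v @ Q @ v)

bound = sdls_bound(Q, alpha=1.0)    # weak duality: bound >= best
```

With $\alpha = 1$ the bound is loose (as in the table below for real instances) but already certified: $\theta(1)$ dominates every value $x^\top Q x$ over cut vectors.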

No duality gap

(figure: $\theta(\alpha)$ as a function of $\alpha$: SDLS bounds for $\alpha > 0$, the SDP relaxation value at $\alpha = 0$, and an NP-hard region tightening to the max-cut value as $\alpha \to -\infty$)

Numerical example

For a random max-cut instance (500 nodes, 70% density), compute the SDLS bounds $\theta(\alpha)$ for several $\alpha \in [0, 1]$:

α       error   iters   time
1       94%     9       28 s
0.1     52%     16      53 s
0.01    10%     33      106 s
0.001   1%      89      275 s

with the guarantee

$$\frac{\theta(\alpha) - \theta(0)}{\theta(\alpha)} \le \text{error}$$

→ when $\alpha \searrow 0$:

SDLS bounds get tighter (but never tighter than the SDP one!)

SDLS problems become more difficult

Conclusion: SDLS bounds

Spherical constraint:

→ new formulation of the rank-one constraint

→ new interpretation of the SDP relaxation

→ opens the way for SDLS solvers

→ for any binary quadratic problem

J. Malick. Spherical constraint in Boolean quadratic problems. Journal of Global Optimization, 2007.

Current research:

→ more complete numerical experiments

→ good balance between accuracy and speed of computation?

→ difficult combinatorial problems (k-cluster, QAP)

Outline

1 Semidefinite programming and least-squares

2 Application 1: constructing covariance matrices in finance

3 Application 2: relaxing binary quadratic problems

4 Application 3: regularizing methods for solving SDP

How to solve general SDP

1 primal-dual interior point methods

2 spectral bundle methods (via $\lambda_{\min}$)

3 low-rank methods, $X = RR^\top$ with $R \in \mathbb{R}^{n \times r}$

4 and others! Sorry for not citing them...

5 but still a need for something else! Real-life applications (e.g. in combinatorics) ask for bigger sizes or quicker solvers

→ Idea: (primal) proximal algorithm

→ Target problems: $n$ not too big, $m$ very large ($n \le 1000$, $m$ up to 100,000)

→ Joint work with F. Rendl, J. Povh and A. Wiegele

Proximal approach on paper

$$\min\ \langle C, X\rangle \ \text{ s.t. }\ \mathcal{A}X = b,\ X \succeq 0 \qquad\iff\qquad \min_{Y \in S_n}\ \min\ \left\{\, \langle C, X\rangle + \frac{1}{2t}\|X - Y\|^2 : \mathcal{A}X = b,\ X \succeq 0 \,\right\}$$

Introduce the convex and differentiable (!) function

$$F_t(Y) = \min\ \left\{\, \langle C, X\rangle + \frac{1}{2t}\|X - Y\|^2 : \mathcal{A}X = b,\ X \succeq 0 \,\right\}$$

Hence

$$\min\ \langle C, X\rangle \ \text{ s.t. }\ \mathcal{A}X = b,\ X \succeq 0 \qquad\iff\qquad \min\ \{\, F_t(Y) : Y \in S_n \,\}$$

Proximal point in practice

General fact: a gradient step on $F_t$ ⟺ a fixed-point iteration on the prox point ⟺ $Y_{k+1} = \operatorname{prox}(Y_k)$, where

$$\operatorname{prox}(Y) = \operatorname{argmin}\ \left\{\, \langle C, X\rangle + \frac{1}{2t}\|X - Y\|^2 : \mathcal{A}X = b,\ X \succeq 0 \,\right\}$$

Practical implementation requires

an algorithm to compute $\operatorname{prox}(Y_k)$ → SDLS

a rule to stop this algorithm (in practice $Y_{k+1} \approx \operatorname{prox}(Y_k)$)

Solving the outer problem

Algorithm (General proximal algorithm for SDP)

Repeat until $\|Z + \mathcal{A}^* y - C\|$ small:
    Repeat until $\|b - \mathcal{A}X\|$ small enough:
        Compute $X = t(\mathcal{A}^* y - C + Y/t)_+$ and $Z = -(\mathcal{A}^* y - C + Y/t)_-$
        Set $g = b - \mathcal{A}X$
        Update $y \leftarrow y + \beta W g$ with appropriate $\beta$ and $W$
    end (inner repeat)
    $Y \leftarrow X$
end (outer repeat)

Under some technical assumptions: any accumulation point of the generated sequence is a solution of the SDP.

Connection with boundary points: if $\mathcal{A}$ is surjective and

$$W = [\mathcal{A}\mathcal{A}^*]^{-1}, \qquad \beta = 1/t,$$

then the present algorithm = the boundary point method.

J. Malick, J. Povh, F. Rendl and A. Wiegele. Boundary point method to solve semidefinite programs. Computing, 2006.

In practice

Regularization algorithms: projection methods vs. interior points.

Complementarity and semidefiniteness are ensured throughout, while primal and dual feasibility are obtained asymptotically.

After testing various prototypes, we chose: one single inner iteration, $W = [\mathcal{A}\mathcal{A}^*]^{-1}$, $\beta = 1/t$, an update strategy for the parameters...

Algorithm (Final algorithm)

Repeat until $\delta$ < tolerance:
    Compute the solution $y$ of $\mathcal{A}\mathcal{A}^* y = \mathcal{A}(C - Z) + (b - \mathcal{A}Y)/t$
    Compute $S = Y/t + \mathcal{A}^* y - C$, then $Z = -S_-$ and $X = t\,S_+$
    Update $Y \leftarrow X$, compute $\delta$ and update $t$

$\mathcal{A}\mathcal{A}^*$ (and its Cholesky factorization) computed one single time!
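
The final algorithm is easy to prototype when $\mathcal{A}\mathcal{A}^*$ is simple. Below, a sketch (fixed $t$, no parameter updates) on the toy SDP $\min \langle C, X\rangle$ s.t. $\operatorname{diag}(X) = e$, $X \succeq 0$, for which $\mathcal{A} = \operatorname{diag}(\cdot)$ and $\mathcal{A}\mathcal{A}^* = I$; the hypothetical 2x2 instance has the known solution $X^* = \begin{pmatrix} 1 & -1 \\ -1 & 1\end{pmatrix}$ with value $-2$:

```python
import numpy as np

def pos_neg_parts(S):
    """Spectral split S = S_+ + S_-, with S_+ >= 0 and S_- <= 0."""
    lam, P = np.linalg.eigh(S)
    S_plus = (P * np.maximum(lam, 0.0)) @ P.T
    return S_plus, S - S_plus

def boundary_point(C, b, t=1.0, n_iter=500):
    """Regularization ('boundary point') iteration for
    min <C,X> s.t. diag(X) = b, X >= 0.
    Here A = diag(.), A* y = Diag(y), so A A* = I."""
    n = C.shape[0]
    Y = np.zeros((n, n))
    Z = np.zeros((n, n))
    for _ in range(n_iter):
        # solve A A* y = A(C - Z) + (b - A Y)/t
        y = np.diag(C - Z) + (b - np.diag(Y)) / t
        S = Y / t + np.diag(y) - C
        S_plus, S_minus = pos_neg_parts(S)
        Z = -S_minus          # dual slack, PSD by construction
        X = t * S_plus        # primal iterate, PSD by construction
        Y = X
    return X, y, Z

C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.ones(2)
X, y, Z = boundary_point(C, b)
```

Complementarity $XZ = 0$ and semidefiniteness hold at every iteration by the spectral split, while $\operatorname{diag}(X) \to e$ and $C - \operatorname{Diag}(y) \to Z$ only in the limit, as stated above.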

Numerical experiments

→ 1st test: random instances with large $m$ (the generator is online, so the data is reproducible): http://bipop.inrialpes.fr/people/malick/software

$$(\mathrm{SDP})\quad \min\ \langle C, X\rangle \ \text{ s.t. }\ \mathcal{A}X = b,\ X \succeq 0 \qquad\qquad (\mathrm{dual})\quad \max\ b^\top y \ \text{ s.t. }\ C - \mathcal{A}^* y = Z,\ Z \succeq 0$$

→ Comparison with 2 strong codes, on a standard machine:

K.C. Toh. Solving large scale semidefinite programs via an iterative solver on the augmented systems. SIAM Journal on Optimization, 2004.

S. Burer and R.D.C. Monteiro. A nonlinear programming algorithm for solving SDP via low-rank factorization. Mathematical Programming, 2003.

Random instances

Stopping tolerance for our regularization method:

$$\text{error} := \max\left\{ \frac{\|\mathcal{A}X - b\|}{1 + \|b\|},\ \frac{\|C - Z - \mathcal{A}^* y\|}{1 + \|C\|} \right\} \le 10^{-7}$$

              prox     Toh              SDPLR
n      m      time     time   error     time   error
400   30000   153      1000   0.6e-7    1880   7.3e-6
500   30000   202      1309   0.5e-7    2165   7.8e-6
500   40000   217      1668   0.4e-7    3600   8.8e-6
700   50000   592      2696   0.8e-7    3600   3.4e-5
700   70000   534      4065   0.3e-7    3600   6.3e-5

(times in seconds)

Note: if the factorization is given, larger instances ($n = 1000$, $m = 150{,}000$) in less than one hour.

Example in combinatorics: Lovász theta number

Let $G$ be a graph (and $\bar G$ its complement).

$$\vartheta(G) := \max\ \langle \mathbf{1}_{n\times n}, X\rangle \quad\text{s.t.}\quad X_{ij} = 0\ \ \forall (i,j) \in E(G),\ \operatorname{trace} X = 1,\ X \succeq 0$$

It holds that $\omega(G) \le \vartheta(\bar G) \le \chi(G)$.

The clique number $\omega(G)$ and the chromatic number $\chi(G)$ are "impossible" to compute and even hard to approximate.

L. Lovasz. On the Shannon capacity of a graph. IEEE Trans. on Information Theory, 1979.

Computing the Lovász theta number

Challenge: the Lovász number of certain graphs of the DIMACS collection had not been computed so far...

Big help: for the Lovász theta number, $\mathcal{A}\mathcal{A}^*$ is diagonal!

Results on the DIMACS library:

graph name    |V(G)|   |E(G)|   ϑ(G)     |E(Ḡ)|   ϑ(Ḡ)
brock400-1    400      20077    39.702   59723    10.388
keller5       776      74710    31.000   225990   31.000
brock800-1    800      112095   42.222   207505   19.233
p-hat500-1    500      93181    13.074   31569    58.036
p-hat1000-3   1000     127754   84.80    371746   18.23

Conclusion: regularization method

Our new method for solving SDP

→ follows a very general approach of convex analysis

→ takes the opposite direction to interior points

→ compares favorably with other solvers on some problems

J. Malick, J. Povh, F. Rendl and A. Wiegele. Regularization methods for semidefinite programming. Submitted, 2007.

More work:

→ to understand the convergence (→ tuning of parameters)

→ to enlarge the range of tackled problems

→ to handle inequalities and other cones

(Final) conclusion: SDLS and next

Semidefinite least-squares: projecting onto subsets of the SDP cone

$$(\mathrm{SDLS})\qquad \min\ \|X - C\|^2 \quad\text{s.t.}\quad \mathcal{A}X = b,\ X \succeq 0$$

→ Efficient solvers for SDLS

→ Besides standard SDP tools, alternative techniques

→ Applications (still under development):

1 constructing covariance matrices in finance

2 bounding procedures in combinatorial optimization

3 a regularization algorithm for solving SDP

Thanks!
