
Constrained stress majorization using diagonally scaled gradient projection

Tim Dwyer and Kim Marriott

Clayton School of Information Technology, Monash University, Australia

Separation constraints: x1 + d ≤ x2, y1 + d ≤ y2

can be used with force-directed layout to impose certain spacing requirements.

Constrained stress majorization layout

[Figure: three boxes centred at (x1, y1), (x2, y2), (x3, y3), with widths w1, w2 and heights h2, h3, kept apart by the non-overlap constraints x1 + (w1 + w2)/2 ≤ x2 and y3 + (h2 + h3)/2 ≤ y2.]

In this talk we present:
- diagonal scaling for faster gradient projection
- changes to our active-set solver
- an evaluation of the new method

Constrained stress majorization
- stress majorization: reduce overall layout stress
- gradient projection: solve quadratic programs
- active-set solver: the projection step

“Unix” graph data from www.graphviz.org

Stress majorization

[Figure: the stress(X) surface over the layout coordinates, with its minimum at (x, y)*.]
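
For reference, the stress function in its standard form, with the usual choice of weights $w_{ij} = d_{ij}^{-2}$:

$$
\mathrm{stress}(X) \;=\; \sum_{i<j} w_{ij}\bigl(\lVert X_i - X_j \rVert - d_{ij}\bigr)^{2},
$$

where $d_{ij}$ is the graph-theoretic (shortest-path) distance between nodes $i$ and $j$.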

Constrained stress majorization

Instead of solving unconstrained quadratic forms, we solve them subject to separation constraints, i.e. quadratic programming.

[Figure: the stress(X) surface restricted to the region satisfying the separation constraints, with constrained minimum (x, y)*.]
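
Stress majorization replaces the stress function at each iteration by a majorizing quadratic, so each iteration solves (separately in each axis) a problem of the shape below; Q and b are generic names for the quadratic and linear terms of that majorizer, and the constraint set is the given separation constraints:

$$
\min_{x}\; x^{\mathsf T} Q\,x - 2\,b^{\mathsf T} x
\quad \text{subject to} \quad
x_l + d \le x_r \ \text{ for each separation constraint}.
$$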

Gradient projection

[Figure sequence: from the current iterate x0, an unconstrained steepest-descent step −αg is taken (reaching x1); the result is projected onto the feasible region, and the iterate then moves along the projected direction d by a step βd (reaching x2); repeating the process converges to the constrained optimum x*.]
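
A minimal sketch of one such iteration for the quadratic objective f(x) = xᵀQx − 2bᵀx (the step-size choices and the `project` placeholder are illustrative; the actual projection is the block-based operation described on the following slides):

```python
import numpy as np

def gradient_projection_step(x, Q, b, project):
    """One gradient-projection iteration for f(x) = x^T Q x - 2 b^T x.

    `project(z)` must return the nearest point to z that satisfies the
    separation constraints (a placeholder here; see the projection slides).
    """
    g = 2.0 * (Q @ x - b)                      # gradient of f at x
    gQg = g @ Q @ g
    if gQg <= 0.0:
        return x
    alpha = (g @ g) / (2.0 * gQg)              # exact unconstrained descent step
    p = project(x - alpha * g)                 # descend, then project back to feasibility
    d = p - x                                  # feasible direction
    dQd = d @ Q @ d
    if dQd <= 0.0:
        return x
    beta = min(1.0, -(g @ d) / (2.0 * dQd))    # exact line search, clipped to stay in [x, p]
    return x + max(0.0, beta) * d              # beta <= 0 would mean no progress
```

With a projection that simply returns its argument, this reduces to exact-line-search steepest descent.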

Convergence

A badly scaled problem can have poor GP convergence. The condition number of Q is κ = M/m, the ratio of its largest eigenvalue M to its smallest eigenvalue m.

[Figure: contours of a badly scaled quadratic objective, annotated with m and M.]

Perfect scaling should give immediate convergence.
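
The dependence on conditioning can be made precise: for a quadratic objective, steepest descent with exact line search satisfies the classical bound below (stated here as background; the slide itself only names the condition number):

$$
f(x_{k+1}) - f^{*} \;\le\; \left(\frac{M-m}{M+m}\right)^{2} \bigl(f(x_k) - f^{*}\bigr)
\;=\; \left(\frac{\kappa-1}{\kappa+1}\right)^{2} \bigl(f(x_k) - f^{*}\bigr),
\qquad \kappa = \frac{M}{m},
$$

so when κ is large the contraction factor approaches 1, and when κ = 1 (perfect scaling) a single step reaches the minimum.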

Scaled gradient projection

Newton's method: scale by the full inverse Hessian, i.e. transform the entire problem s.t. the quadratic term becomes the identity (perfect scaling).
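
Concretely, one way to read this (with generic names; the substitution below is the standard one, not necessarily the slide's exact notation): with a nonsingular scaling matrix S, substitute x = S y, so that

$$
x^{\mathsf T} Q\,x - 2\,b^{\mathsf T} x \;=\; y^{\mathsf T}\bigl(S^{\mathsf T} Q S\bigr)y - 2\,\bigl(S^{\mathsf T} b\bigr)^{\mathsf T} y .
$$

Choosing S so that SᵀQS = I gives perfect scaling, and gradient descent in y is then equivalent to Newton's method in x; the catch, taken up below, is what such a full transformation does to the separation constraints.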


Projection operation

The projection is itself a quadratic program: minimize the squared distance of the variables x from their desired positions u, subject to the separation constraints xl + d ≤ xr.

Solve with an active-set style method:
- Move each xi to its desired position ui.
- Build blocks of active constraints:
  - find the most violated constraint xl + d ≤ xr,
  - add it to a block B (satisfying the constraint),
  - move B to the average position of its constituent variables,
  - etc.

[Figure: variables a, b, c, d, e at their desired positions ui, progressively merged into blocks as constraints become violated.]
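
A simplified, merge-only sketch of this projection (illustrative only: it assumes unit weights and ignores block splitting and within-block violations, which the incremental solver on the next slides handles):

```python
from typing import Dict, List, Tuple

Constraint = Tuple[int, float, int]   # (l, d, r) encodes x_l + d <= x_r

class Block:
    """A set of variables rigidly joined by active constraints."""
    def __init__(self, i: int, desired_pos: float):
        self.offsets: Dict[int, float] = {i: 0.0}  # offset of each var from the block reference
        self.pos = desired_pos                     # reference position of the block

    def place(self, desired: List[float]) -> None:
        # Unweighted optimum: average of (desired position - offset) over members.
        self.pos = sum(desired[i] - o for i, o in self.offsets.items()) / len(self.offsets)

def project(desired: List[float], constraints: List[Constraint]) -> List[float]:
    """Move every variable to its desired position, then merge blocks over the
    most violated constraint until no violated constraint spans two blocks."""
    n = len(desired)
    block_of = {i: Block(i, desired[i]) for i in range(n)}

    def pos(i: int) -> float:
        b = block_of[i]
        return b.pos + b.offsets[i]

    while True:
        # Most violated constraint whose endpoints lie in different blocks.
        worst, worst_v = None, 1e-9
        for (l, d, r) in constraints:
            if block_of[l] is not block_of[r]:
                v = pos(l) + d - pos(r)
                if v > worst_v:
                    worst, worst_v = (l, d, r), v
        if worst is None:
            break
        l, d, r = worst
        bl, br = block_of[l], block_of[r]
        # Merge br into bl so that x_l + d == x_r holds exactly.
        shift = bl.offsets[l] + d - br.offsets[r]
        for j, o in br.offsets.items():
            bl.offsets[j] = o + shift
            block_of[j] = bl
        bl.place(desired)

    return [pos(i) for i in range(n)]

# Example: the desired positions violate x0 + 3 <= x1.
print(project([0.0, 1.0, 10.0], [(0, 3.0, 1), (1, 3.0, 2)]))  # -> [-1.0, 2.0, 10.0]
```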


Projection operation: incremental

Block structure is preserved between projection operations. Before each projection, the previous blocks are checked for split points (this ensures convergence). In the next projection, blocks are then moved as one to their new weighted-average desired positions.

[Figure: the blocks formed over variables a–e carried forward into the next projection.]
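
The split check can be read as the usual active-set optimality test (standard reasoning, not a formula taken from the slides): each active constraint c in a block carries a Lagrange multiplier λ_c, and at a constrained minimum every multiplier must be nonnegative; a negative multiplier means the objective can be decreased by dropping that constraint, i.e. splitting the block there:

$$
\lambda_c \ge 0 \ \text{ for every active constraint } c \text{ in the block}; \qquad
\lambda_c < 0 \;\Rightarrow\; \text{split the block at } c .
$$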


Scaling for stress majorization

The projection is itself a quadratic program, and scaling by a full n×n matrix turns the separation constraints into general linear constraints over n variables, which would defeat the block-based projection.
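
To see why, consider the same substitution x = S y as above (again with generic names): a separation constraint x_l + d ≤ x_r becomes

$$
x_l + d \le x_r \;\longrightarrow\; \sum_{j} S_{lj}\,y_j + d \;\le\; \sum_{j} S_{rj}\,y_j ,
$$

a dense linear constraint over all n variables when S is full, but it stays a two-variable scaled separation constraint

$$
s_l\,y_l + d \;\le\; s_r\,y_r
$$

when S = diag(s_1, …, s_n).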

Diagonal scaling

Q is diagonally dominant, so a diagonal scaling matrix is chosen s.t. the scaled quadratic term is well conditioned.

Scaled separation constraints

Under diagonal scaling the separation constraints become scaled separation constraints, so we need new expressions for:
- the optimal block position,
- the Lagrange multipliers for the active constraints.
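
A natural concrete choice, consistent with the diagonal dominance of Q but stated here as an assumption rather than a quotation from the slides: take

$$
S = \operatorname{diag}(s_1,\dots,s_n), \qquad s_i = Q_{ii}^{-1/2},
$$

so that the scaled matrix SQS has unit diagonal, and each separation constraint x_l + d ≤ x_r becomes the scaled separation constraint s_l y_l + d ≤ s_r y_r, still involving only two variables.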

Optimum block position

Minimize the scaled projection objective over the block's position, subject to the block's active constraints. Because the active constraints fix each variable's offset within the block, the minimum is attained at a closed-form weighted average of the members' desired positions (adjusted for their offsets).
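
As a worked illustration of the shape of this expression, here is the unscaled, unit-weight case (the paper derives the analogous formula for the diagonally scaled objective): a block B with reference position b and fixed member offsets o_i minimizes

$$
\min_{b}\; \sum_{i \in B} \bigl(b + o_i - u_i\bigr)^{2}
\quad\Longrightarrow\quad
b^{*} = \frac{1}{|B|} \sum_{i \in B} \bigl(u_i - o_i\bigr),
$$

i.e. the block moves to the average of its members' desired positions corrected by their offsets; the diagonally scaled version replaces this plain average by a weighted one.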

Test cases

[Figures: example layouts of the test graphs, unconstrained and constrained.]

Results

Improved convergence

Summary

Diagonal scaling:
- is cheap to compute
- transforms separation constraints into scaled separation constraints rather than full linear constraints, so we can still use the block tricks
- is appropriate for improving the condition of graph Laplacian matrices because they are diagonally dominant
- particularly improves the Laplacian's condition when the graph has a wide variation in degree (as is often the case in practical applications)
