
A more reliable reduction algorithm for behavioral model extraction

Dmitry Vasilyev, Jacob White

Massachusetts Institute of Technology

Outline

• Background: projection framework for model reduction; the Balanced Truncation algorithm and its approximations; the AISIAD algorithm
• Description of the proposed algorithm: modified AISIAD and a low-rank square root algorithm
• Efficiency and accuracy
• Conclusions

Model reduction problem

• Reduction should be automatic
• Must preserve input-output properties

Many (> 10^4) internal states, with given inputs and outputs, are reduced to few (< 100) internal states with the same inputs and outputs.

Differential Equation Model

E dx/dt = A x + B u(t),   y(t) = C x + D u(t)

The model can represent: finite-difference spatial discretizations of PDEs; circuits with linear elements.

A – stable, n x n (large); E – SPD, n x n
x – state
u – vector of inputs
y – vector of outputs

Model reduction problem

n – large (thousands)!  q – small (tens)

Need the reduction to be automatic and to preserve input-output properties (the transfer function).

Approximation error

Wide-band applications: the model should have a small worst-case error

||H - H_r||_inf = max over ω of σ_max( H(jω) - H_r(jω) )

=> the maximal difference over all frequencies.
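In practice this worst-case (H-infinity-style) error is often estimated by sampling the error system on a frequency grid. A minimal sketch, using a hypothetical one-state model and a slightly perturbed "reduced" model (all numerical values are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical scalar models, purely for illustration: H(s) = c*b/(s - a) + d.
def tf(a, b, c, d, w):
    """Transfer function H(jw) of a one-state system (a, b, c, d are scalars)."""
    return c * b / (1j * w - a) + d

ws = np.logspace(-2, 4, 2000)        # frequency grid (rad/s)
H  = tf(-1.0, 1.0, 1.0, 0.0, ws)     # "full" model
Hr = tf(-1.1, 1.0, 1.0, 0.0, ws)     # perturbed "reduced" model
err = np.max(np.abs(H - Hr))         # sampled worst-case error estimate
```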

Projection framework for model reduction

Pick biorthogonal projection matrices W and V (W^T V = I); the projection bases are the columns of V and W.

The state is approximated as x ≈ V x_r, where V is n x q and x_r is the reduced state; the dynamics A x are projected onto W, giving W^T A V x_r.

Most reduction methods are based on projection.
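As a concrete sketch of the projection step (random matrices stand in for a real model; the -5I shift merely makes A stable, and all sizes are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, m, p = 8, 3, 2, 2                            # toy sizes: states, reduced states, inputs, outputs
A = rng.standard_normal((n, n)) - 5 * np.eye(n)    # shift makes A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = np.zeros((p, m))

# Any two bases; make them biorthogonal so that W^T V = I.
V = np.linalg.qr(rng.standard_normal((n, q)))[0]
W = np.linalg.qr(rng.standard_normal((n, q)))[0]
W = W @ np.linalg.inv(W.T @ V).T                   # now W.T @ V = I_q

# Projected (reduced) system matrices (W^T A V, W^T B, C V, D).
Ar, Br, Cr, Dr = W.T @ A @ V, W.T @ B, C @ V, D
```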

LTI system: input u(t) → state x(t) → output y(t)

P (controllability): which modes are easier to reach?
Q (observability): which modes produce more output?

The reduced model retains the most controllable and most observable modes: a mode must be both very controllable and very observable. Projection should preserve these important modes.

Balanced truncation reduction (TBR)

Reduced system: (W^T A V, W^T B, C V, D)

Compute the controllability and observability gramians P and Q (~n^3 work):

A P + P A^T + B B^T = 0
A^T Q + Q A + C^T C = 0

The reduced model keeps the dominant eigenspaces of the product PQ (~n^3 work):

P Q v_i = λ_i v_i,   w_i^T P Q = λ_i w_i^T

Very expensive: P and Q are dense even for sparse models.
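For small dense models, this TBR pipeline can be sketched with SciPy's Lyapunov solver (illustrative toy sizes; a real extraction problem would be far too large for these dense O(n^3) solves, which is exactly the point of the slides):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, q = 10, 3                                      # toy sizes
A = rng.standard_normal((n, n)) - 5 * np.eye(n)   # stable A
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# The two O(n^3) dense Lyapunov solves required by TBR:
P = solve_continuous_lyapunov(A, -B @ B.T)        # A P + P A^T + B B^T = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)      # A^T Q + Q A + C^T C = 0

# Keep the dominant right eigenvectors of the product P Q.
lam, Vfull = np.linalg.eig(P @ Q)
idx = np.argsort(-lam.real)[:q]
V = np.real(Vfull[:, idx])
```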

Most reduction algorithms effectively approximate the dominant eigenspaces of P and Q separately:

• Arnoldi [Grimme '97]: colsp(V) = {A^-1 B, A^-2 B, …}, W = V – approximates P_dom only
• Padé via Lanczos [Feldmann and Freund '95]:
  colsp(V) = {A^-1 B, A^-2 B, …} – approximates P_dom
  colsp(W) = {A^-T C^T, (A^-T)^2 C^T, …} – approximates Q_dom
• Frequency-domain POD [Willcox '02], Poor Man's TBR [Phillips '04]:
  colsp(V) = {(jω_1 I - A)^-1 B, (jω_2 I - A)^-1 B, …} – approximates P_dom
  colsp(W) = {(jω_1 I - A)^-T C^T, (jω_2 I - A)^-T C^T, …} – approximates Q_dom

However, what matters is the product PQ.

RC line (symmetric circuit)

Symmetric: P = Q – all controllable states are observable, and vice versa.

V(t) – input, i(t) – output

RLC line (nonsymmetric circuit)

P and Q are no longer equal! By keeping only the most controllable and/or the most observable states, we may not find the dominant eigenvectors of PQ.

Lightly damped RLC circuit

Exact low-rank approximations of P and Q of order < 50 lead to PQ ≈ 0!!

R = 0.008, L = 10^-5, C = 10^-6, N = 100

Lightly damped RLC circuit

The union of the eigenspaces of P and Q does not necessarily approximate the dominant eigenspace of PQ.

[Figure: top 5 eigenvectors of P vs. top 5 eigenvectors of Q]

AISIAD model reduction algorithm

Idea of the AISIAD approximation: approximate the eigenvectors using power iterations ("iterate"):

X_i = (PQ) V_i,   V_i+1 = qr(X_i)

V_i converges to the dominant eigenvectors of PQ. But how do we find the product (PQ) V_i?
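If the product (PQ)V_i could be formed exactly, this would be an ordinary orthogonalized power (subspace) iteration. A dense sketch on a toy system (forming P and Q explicitly, which AISIAD is designed to avoid):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n, q = 10, 3                                      # toy sizes
A = rng.standard_normal((n, n)) - 5 * np.eye(n)   # stable toy system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
P = solve_continuous_lyapunov(A, -B @ B.T)        # exact gramians, for illustration
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

V = np.linalg.qr(rng.standard_normal((n, q)))[0]
for _ in range(50):
    V, _ = np.linalg.qr(P @ (Q @ V))   # orthogonalized power iteration on PQ
# V now approximately spans the dominant invariant subspace of PQ.
```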

Approximation of the product V_i+1 = qr(P Q V_i): the AISIAD algorithm

W_i ≈ qr(Q V_i)
V_i+1 ≈ qr(P W_i)

Each half-step (the products Q V_i and P W_i) is approximated using the solution of a Sylvester equation.

More detailed view of the AISIAD approximation

Right-multiply the Lyapunov equation A P + P A^T + B B^T = 0 by W_i:

A (P W_i) + P A^T W_i + B B^T W_i = 0

Split P A^T W_i = (P W_i)(W_i^T A^T W_i) + P (I - W_i W_i^T) A^T W_i. With X = P W_i (n x q) and H = W_i^T A^T W_i (q x q), the original AISIAD drops the second term and solves

A X + X H + M = 0,   M = B B^T W_i  (n x q)

Modified AISIAD approximation

Instead of dropping the term P (I - W_i W_i^T) A^T W_i, approximate it using a low-rank approximation P̂ of P:

M = P̂ (I - W_i W_i^T) A^T W_i + B B^T W_i

We can take advantage of numerous methods which approximate P and Q!

Specialized Sylvester equation

A X + X H = -M

where A is n x n, X and M are n x q, and H is q x q. Only the column span of X is needed.

Solving the Sylvester equation

Take a Schur decomposition of H: H = U T U^H, with T upper triangular. Substituting X~ = X U and M~ = M U gives

A X~ + X~ T = -M~

which is solved for the columns of X~ one at a time; finally X = X~ U^H.

Solving the Sylvester equation

• Applicable to any stable A
• Requires solving q shifted linear systems
• The solution can be accelerated via fast matrix-vector products
• Another method exists, based on IRA; it needs A > 0 [Zhou '02]

Solving the Sylvester equation

For SISO systems and P̂ = 0, this is equivalent to matching at the frequency points -Λ(W^T A W).
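A minimal dense sketch of this column-by-column solve (using a complex Schur form of H so that each column costs exactly one shifted solve; the fast matrix-vector-product acceleration mentioned in the slides is omitted, and the test matrices are toy stand-ins):

```python
import numpy as np
from scipy.linalg import schur

def sylvester_columns(A, H, M):
    """Solve A X + X H = -M column by column via a complex Schur form of H."""
    T, U = schur(H, output='complex')      # H = U T U^H, T upper triangular
    Mt = M.astype(complex) @ U             # M~ = M U
    n, q = M.shape
    Xt = np.zeros((n, q), dtype=complex)   # X~ = X U
    I = np.eye(n)
    for j in range(q):
        # (A + T[j,j] I) x~_j = -m~_j - sum_{k<j} T[k,j] x~_k
        rhs = -Mt[:, j] - Xt[:, :j] @ T[:j, j]
        Xt[:, j] = np.linalg.solve(A + T[j, j] * I, rhs)  # one shifted solve per column
    return (Xt @ U.conj().T).real          # back-transform; real data => real solution

rng = np.random.default_rng(3)
n, q = 8, 3
A = rng.standard_normal((n, n)) - 5 * np.eye(n)   # stable A
H = rng.standard_normal((q, q)) - 2 * np.eye(q)   # stable H
M = rng.standard_normal((n, q))
X = sylvester_columns(A, H, M)
```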

Modified AISIAD algorithm

1. Obtain low-rank approximations P̂, Q̂ of P and Q (e.g. by the low-rank square root method).
2. Solve A X_i + X_i H + M = 0  =>  X_i ≈ P W_i,
   where H = W_i^T A^T W_i,  M = P̂ (I - W_i W_i^T) A^T W_i + B B^T W_i.
3. Perform the QR decomposition X_i = V_i R.
4. Solve A^T Y_i + Y_i F + N = 0  =>  Y_i ≈ Q V_i,
   where F = V_i^T A V_i,  N = Q̂ (I - V_i V_i^T) A V_i + C^T C V_i.
5. Perform the QR decomposition Y_i = W_i+1 R to get the new iterate.
6. Go to step 2 and iterate.
7. Bi-orthogonalize W and V and construct the reduced model (W^T A V, W^T B, C V, D).
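Assuming the low-rank gramian approximations P̂, Q̂ are available (here replaced by exact dense gramians purely for illustration, on a toy random system), steps 1-7 above can be sketched as:

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_lyapunov

rng = np.random.default_rng(4)
n, q = 12, 3                                      # toy sizes
A = rng.standard_normal((n, n)) - 6 * np.eye(n)   # stable A
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Step 1: stand-ins for the low-rank approximations P^, Q^ (exact here, for illustration).
Ph = solve_continuous_lyapunov(A, -B @ B.T)
Qh = solve_continuous_lyapunov(A.T, -C.T @ C)

W = np.linalg.qr(rng.standard_normal((n, q)))[0]
for _ in range(10):
    # Step 2: A X + X H + M = 0  =>  X ~ P W
    H = W.T @ A.T @ W
    M = Ph @ (np.eye(n) - W @ W.T) @ A.T @ W + B @ B.T @ W
    X = solve_sylvester(A, H, -M)
    V, _ = np.linalg.qr(X)                        # step 3
    # Step 4: A^T Y + Y F + N = 0  =>  Y ~ Q V
    F = V.T @ A @ V
    N = Qh @ (np.eye(n) - V @ V.T) @ A @ V + C.T @ C @ V
    Y = solve_sylvester(A.T, F, -N)
    W, _ = np.linalg.qr(Y)                        # step 5; step 6 = repeat

# Step 7: bi-orthogonalize (W^T V = I) and project.
W = W @ np.linalg.inv(W.T @ V).T
Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V
```

With exact gramians the Sylvester solutions are exact (X = PW, Y = QV), so this reduces to a check of the iteration's structure; the real algorithm replaces Ph, Qh by cheap low-rank factors.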

For systems in the descriptor form

Generalized Lyapunov equations:

A P E^T + E P A^T + B B^T = 0
A^T Q E + E^T Q A + C^T C = 0

These lead to similar approximate power iterations.

mAISIAD and low-rank square root

Low-rank gramians (an inexpensive step) feed either the LR-square-root projection (more expensive) or the mAISIAD iteration (cost varies).

For the majority of non-symmetric cases, mAISIAD works better than the low-rank square root.

RLC line example results

[Figure: H-infinity norm of the reduction error (worst-case discrepancy over all frequencies)]

N = 1000, 1 input, 2 outputs

Steel rail cooling profile benchmark

Taken from the Oberwolfach benchmark collection; N = 1357, 7 inputs, 6 outputs.

mAISIAD is useless for symmetric models

For symmetric systems (A = A^T, B = C^T), P = Q; therefore mAISIAD is equivalent to LR-square-root for P̂, Q̂ of order q.

RC line example

Cost of the algorithm

The cost of the algorithm is directly proportional to the cost of solving a shifted linear system:

(A + s_jj I) x = b   (non-descriptor case)
(A + s_jj E) x = b   (descriptor case)

where s_jj is a complex number (a Schur shift). The cost does not depend on the number of inputs and outputs.

Conclusions

• The algorithm has superior accuracy and extended applicability with respect to the original AISIAD method.
• It is a very promising low-cost approximation to TBR.
• Applicable to any dynamical system; it will work (though usually worse) even without low-rank gramians.
• Passivity and stability preservation are possible via post-processing.
• Not beneficial if the model is symmetric.