
Constructive Methods of Wiener-Hopf Factorization



OT21: Operator Theory: Advances and Applications Vol. 21

Editor: I. Gohberg, Tel Aviv University, Ramat-Aviv, Israel

Editorial Office

School of Mathematical Sciences, Tel Aviv University, Ramat-Aviv, Israel

Editorial Board

A. Atzmon (Tel-Aviv), J. A. Ball (Blacksburg), K. Clancey (Athens, USA), L. A. Coburn (Buffalo), R. G. Douglas (Stony Brook), H. Dym (Rehovot), A. Dynin (Columbus), P. A. Fillmore (Halifax), C. Foias (Bloomington), P. A. Fuhrmann (Beer Sheva), S. Goldberg (College Park), B. Gramsch (Mainz), J. A. Helton (La Jolla), D. Herrero (Tempe), M. A. Kaashoek (Amsterdam)

Honorary and Advisory Editorial Board

P. R. Halmos (Bloomington) T. Kato (Berkeley) S. G. Mikhlin (Leningrad)

Birkhäuser Verlag Basel · Boston · Stuttgart

T. Kailath (Stanford), H. G. Kaper (Argonne), S. T. Kuroda (Tokyo), P. Lancaster (Calgary), L. E. Lerer (Haifa), M. S. Livsic (Beer Sheva), E. Meister (Darmstadt), B. Mityagin (Columbus), J. D. Pincus (Stony Brook), M. Rosenblum (Charlottesville), J. Rovnyak (Charlottesville), D. E. Sarason (Berkeley), H. Widom (Santa Cruz), D. Xia (Nashville)

R. Phillips (Stanford) B. Sz.-Nagy (Szeged)


Constructive Methods of Wiener-Hopf Factorization

Edited by

I. Gohberg M. A. Kaashoek

1986 Birkhäuser Verlag Basel · Boston · Stuttgart


Volume Editorial Office

Department of Mathematics and Computer Science, Vrije Universiteit, P.O. Box 7161, 1007 MC Amsterdam, The Netherlands

Library of Congress Cataloging in Publication Data

Constructive methods of Wiener-Hopf factorization. (Operator theory, advances and applications; vol. 21) Includes bibliographies and index. 1. Wiener-Hopf operators. 2. Factorization of operators. I. Gohberg, I. (Israel), 1928- . II. Kaashoek, M. A. III. Series: Operator theory, advances and applications; v. 21. QA329.2.C665 1986 515.7'246 86-21587

CIP-Kurztitelaufnahme der Deutschen Bibliothek

Constructive methods of Wiener-Hopf factorization / ed. by I. Gohberg ; M. A. Kaashoek. - Basel ; Boston ; Stuttgart : Birkhäuser, 1986.

(Operator theory ; Vol. 21)

NE: Gohberg, Israel [Hrsg.]; GT

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN-13: 978-3-0348-7420-5    DOI: 10.1007/978-3-0348-7418-2

© 1986 Birkhäuser Verlag Basel

e-ISBN-13: 978-3-0348-7418-2

softcover reprint of the hardcover 1st edition 1986


This volume consists of a selection of papers concerning a new approach to the problem of Wiener-Hopf factorization for rational and analytic matrix-valued (or operator-valued) functions. It is a result of developments which took place during the past ten years. The main advantage of this new approach is that it allows one to get the Wiener-Hopf factorization explicitly in terms of the original function. The starting point is a special representation of the function which is taken from Mathematical Systems Theory, where it is known as a realization. For the case of rational matrix-valued functions the final theorems express the factors in the factorization and the indices in terms of the three matrices which appear in the realization.

This book consists of two parts. Part I concerns canonical and, more generally, minimal factorization. Part II is dedicated to non-canonical Wiener-Hopf factorization (i.e., the factorization indices are not all zero). Each part starts with an editorial introduction which contains short descriptions of each of the papers.

This book is a result of research which for a large part was done at the Vrije Universiteit at Amsterdam and was started about ten years ago. It is a pleasure to thank the Department of Mathematics and Computer Science of the Vrije Universiteit for its support and understanding during all those years. We also like to thank the Econometrics Institute of the Erasmus Universiteit at Rotterdam for its technical assistance with the preparations of this volume.

Amsterdam, June 1986                    I. Gohberg, M.A. Kaashoek


TABLE OF CONTENTS

PART I  CANONICAL AND MINIMAL FACTORIZATION ......................... 1

Editorial introduction .............................................. 1

J.A. Ball and A.C.M. Ran: LEFT VERSUS RIGHT CANONICAL
FACTORIZATION ....................................................... 9
  1. Introduction ................................................... 9
  2. Left and right canonical Wiener-Hopf factorization ............. 11
  3. Application to singular integral operators ..................... 19
  4. Spectral and antispectral factorization on the unit circle ..... 22
  5. Symmetrized left and right canonical spectral factorization
     on the imaginary axis .......................................... 33
  References ........................................................ 37

H. Bart, I. Gohberg and M.A. Kaashoek: WIENER-HOPF EQUATIONS
WITH SYMBOLS ANALYTIC IN A STRIP .................................... 39
  0. Introduction ................................................... 39
  I. Realization .................................................... 41
     1. Preliminaries ............................................... 41
     2. Realization triples ......................................... 43
     3. The realization theorem ..................................... 47
     4. Construction of realization triples ......................... 49
     5. Basic properties of realization triples ..................... 51
  II. Applications .................................................. 55
     1. Inverse Fourier transforms .................................. 55


     2. Coupling .................................................... 57
     3. Inversion and Fredholm properties ........................... 62
     4. Canonical Wiener-Hopf factorization ......................... 66
     5. The Riemann-Hilbert boundary value problem .................. 71
  References ........................................................ 72

I. Gohberg, M.A. Kaashoek, L. Lerer and L. Rodman: ON TOEPLITZ
AND WIENER-HOPF OPERATORS WITH CONTOURWISE RATIONAL MATRIX
AND OPERATOR SYMBOLS ................................................ 75
  0. Introduction ................................................... 76
  1. Indicator ...................................................... 78
  2. Toeplitz operators on compounded contours ...................... 81
  3. Proof of the main theorems ..................................... 84
  4. The barrier problem ............................................ 100
  5. Canonical factorization ........................................ 102
  6. Unbounded domains .............................................. 107
  7. The pair equation .............................................. 112
  8. Wiener-Hopf equation with two kernels .......................... 119
  9. The discrete case .............................................. 123
  References ........................................................ 125

L. Roozemond: CANONICAL PSEUDO-SPECTRAL FACTORIZATION AND
WIENER-HOPF INTEGRAL EQUATIONS ...................................... 127
  0. Introduction ................................................... 127
  1. Canonical pseudo-spectral factorizations ....................... 130
  2. Pseudo-Γ-spectral subspaces .................................... 133
  3. Description of all canonical pseudo-Γ-spectral
     factorizations ................................................. 135
  4. Non-negative rational matrix functions ......................... 144
  5. Wiener-Hopf integral equations of non-normal type .............. 146
  6. Pairs of function spaces of unique solvability ................. 149
  References ........................................................ 156


I. Gohberg and M.A. Kaashoek: MINIMAL FACTORIZATION OF
INTEGRAL OPERATORS AND CASCADE DECOMPOSITIONS OF SYSTEMS ............ 157
  0. Introduction ................................................... 157
  I. Main results ................................................... 159
     1. Minimal representation and degree ........................... 160
     2. Minimal factorization (1) ................................... 161
     3. Minimal factorization of Volterra integral operators (1) .... 164
     4. Stationary causal operators and transfer functions .......... 168
     5. SB-minimal factorization (1) ................................ 172
     6. SB-minimal factorization in the class (USB) ................. 174
     7. Analytic semi-separable kernels ............................. 175
     8. LU- and UL-factorizations (1) ............................... 175
  II. Cascade decomposition of systems .............................. 178
     1. Preliminaries about systems with boundary conditions ........ 178
     2. Cascade decompositions ...................................... 182
     3. Decomposing projections ..................................... 182
     4. Main decomposition theorems ................................. 184
     5. Proof of Theorem II.4.1 ..................................... 186
     6. Proof of Theorem II.4.2 ..................................... 191
     7. Proof of Theorem II.4.3 ..................................... 195
     8. Decomposing projections for inverse systems ................. 198
  III. Proofs of the main theorems .................................. 202
     1. A factorization lemma ....................................... 202
     2. Minimal factorization (2) ................................... 203
     3. SB-minimal factorization (2) ................................ 208
     4. Proof of Theorem I.6.1 ...................................... 211
     5. Minimal factorization of Volterra integral operators (2) .... 215
     6. Proof of Theorem I.4.1 ...................................... 220
     7. A remark about minimal factorization and inversion .......... 222
     8. LU- and UL-factorizations (2) ............................... 222


     9. Causal/anticausal decompositions ............................ 225
  References ........................................................ 229

PART II  NON-CANONICAL WIENER-HOPF FACTORIZATION .................... 231

Editorial introduction .............................................. 231

H. Bart, I. Gohberg and M.A. Kaashoek: EXPLICIT WIENER-HOPF
FACTORIZATION AND REALIZATION ....................................... 235
  0. Introduction ................................................... 235
  I. Preliminaries .................................................. 237
     1. Preliminaries about transfer functions ...................... 237
     2. Preliminaries about Wiener-Hopf factorization ............... 240
     3. Reduction of factorization to nodes with centralized
        singularities ............................................... 243
  II. Incoming characteristics ...................................... 254
     1. Incoming bases .............................................. 254
     2. Feedback operators related to incoming bases ................ 262
     3. Factorization with non-negative indices ..................... 268
  III. Outgoing characteristics ..................................... 272
     1. Outgoing bases .............................................. 272
     2. Output injection operators related to outgoing bases ........ 277
     3. Factorization with non-positive indices ..................... 280
  IV. Main results .................................................. 285
     1. Intertwining relations for incoming and outgoing data ....... 285
     2. Dilation to a node with centralized singularities ........... 291
     3. Main theorem and corollaries ................................ 303
  References ........................................................ 314

H. Bart, I. Gohberg and M.A. Kaashoek: INVARIANTS FOR
WIENER-HOPF EQUIVALENCE OF ANALYTIC OPERATOR FUNCTIONS .............. 317


  1. Introduction and main result ................................... 317
  2. Simple nodes with centralized singularities .................... 322
  3. Multiplication by plus and minus terms ......................... 326
  4. Dilation ....................................................... 334
  5. Spectral characteristics of transfer functions:
     outgoing spaces ................................................ 338
  6. Spectral characteristics of transfer functions:
     incoming spaces ................................................ 343
  7. Spectral characteristics and Wiener-Hopf equivalence ........... 352
  References ........................................................ 354

H. Bart, I. Gohberg and M.A. Kaashoek: MULTIPLICATION BY
DIAGONALS AND REDUCTION TO CANONICAL FACTORIZATION .................. 357
  1. Introduction ................................................... 357
  2. Spectral pairs associated with products of nodes ............... 359
  3. Multiplication by diagonals .................................... 361
  References ........................................................ 371

M.A. Kaashoek and A.C.M. Ran: SYMMETRIC WIENER-HOPF
FACTORIZATION OF SELF-ADJOINT RATIONAL MATRIX FUNCTIONS
AND REALIZATION ..................................................... 373
  0. Introduction and summary ....................................... 373
     1. Introduction ................................................ 373
     2. Summary ..................................................... 374
  I. Wiener-Hopf factorization ...................................... 379
     1. Realizations with centralized singularities ................. 379
     2. Incoming data and related feedback operators ................ 381
     3. Outgoing data and related output injection operators ........ 383
     4. Dilation to realizations with centralized singularities ..... 385
     5. The final formulas .......................................... 395


  II. Symmetric Wiener-Hopf factorization ........................... 398
     1. Duality between incoming and outgoing operators ............. 398
     2. The basis in ℂ^m and duality between the feedback
        operators and the output injection operators ................ 402
     3. Proof of the main theorems .................................. 405
  References ........................................................ 409


PART I

CANONICAL AND MINIMAL FACTORIZATION

EDITORIAL INTRODUCTION

The problem of canonical Wiener-Hopf factorization appears in different mathematical fields, theoretical as well as applied. To define this type of factorization consider the matrix-valued function

(1)   W(λ) = I_m − ∫_{−∞}^{∞} e^{iλt} k(t) dt,   −∞ < λ < ∞,

where k is an m × m matrix function of which the entries are in L_1(−∞,∞) and I_m is the m × m identity matrix. A (right) canonical (Wiener-Hopf) factorization of W relative to the real line is a multiplicative decomposition

(2)   W(λ) = W_−(λ) W_+(λ),   −∞ < λ < ∞,

in which the factors W_− and W_+ are of the form

W_−(λ) = I_m − ∫_{−∞}^{0} e^{iλt} k_1(t) dt,   Im λ ≤ 0,

W_+(λ) = I_m − ∫_{0}^{∞} e^{iλt} k_2(t) dt,   Im λ ≥ 0,

where k_1 and k_2 are m × m matrix functions with entries in L_1(−∞,0] and L_1[0,∞), respectively, and both factors are invertible on their closed half planes:

det W_−(λ) ≠ 0 (Im λ ≤ 0),   det W_+(λ) ≠ 0 (Im λ ≥ 0).

Such a factorization does not always exist, but if W(λ) (or, more generally, its real part) is positive definite for all real λ, then the matrix function W admits a canonical factorization (see [8], [7]). Sometimes iterative methods can be used to construct a canonical factorization. In the special case when W(λ) is a rational matrix function there is an algorithm of elementary row and column operations which leads in a finite number of steps to a canonical factorization, provided such a factorization exists.

In the late seventies a new method was developed to deal with


factorization problems for rational matrix functions. This method is based on a special representation of the function, namely in the form

(3)   W(λ) = I_m + C(λI_n − A)^{−1}B,   −∞ < λ < ∞,

where A : ℂ^n → ℂ^n, B : ℂ^m → ℂ^n and C : ℂ^n → ℂ^m are linear operators. The representation (3), which comes from Mathematical Systems Theory, is called a realization. The smallest possible number n in (3) is called the degree of W (notation: δ(W)), and if n = δ(W), then (3) is said to be a minimal realization of W. The factorization method based on (3) has led to the following canonical factorization theorem ([1], Section 4.5):

THEOREM 1. Let W(λ) = I_m + C(λI_n − A)^{−1}B be a minimal realization, and assume A has no real eigenvalue. Then W admits a canonical Wiener-Hopf factorization relative to the real line if and only if the following two conditions are fulfilled:

(i) A× := A − BC has no real eigenvalue,

(ii) ℂ^n = M ⊕ M×,

where M (resp. M×) is the space spanned by the eigenvectors and generalized eigenvectors corresponding to the eigenvalues of A (resp. A×) in the upper (resp. lower) half plane. Furthermore, in that case W admits a canonical factorization W(λ) = W_−(λ)W_+(λ), λ ∈ ℝ, with

W_−(λ) = I_m + C(λI_n − A)^{−1}(I − Π)B,

W_+(λ) = I_m + CΠ(λI_n − A)^{−1}B,

W_−(λ)^{−1} = I_m − C(I − Π)(λI_n − A×)^{−1}B,

W_+(λ)^{−1} = I_m − C(λI_n − A×)^{−1}ΠB,

where Π is the projection of ℂ^n along M onto M×.
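Theorem 1 is fully constructive, so its conditions and formulas can be checked numerically. The sketch below is ours, not the book's: the matrices A, B, C are illustrative choices for which A and A× = A − BC have no real eigenvalues and ℂ² = M ⊕ M× holds, and we verify W(λ) = W_−(λ)W_+(λ) at a few real points.

```python
import numpy as np

# Illustrative data (our own choice, not from the book).
A = np.array([[1j, 0], [0, -1j]])
B = np.array([[1.0], [2.0]], dtype=complex)
C = np.array([[3.0, 1.0]], dtype=complex)
Ax = A - B @ C

def half_plane_span(T, upper):
    """Columns: eigenvectors of T with eigenvalues in the chosen open half plane."""
    vals, vecs = np.linalg.eig(T)
    return vecs[:, vals.imag > 0] if upper else vecs[:, vals.imag < 0]

M = half_plane_span(A, upper=True)     # eigenvectors of A  in the upper half plane
Mx = half_plane_span(Ax, upper=False)  # eigenvectors of Ax in the lower half plane

S = np.hstack([M, Mx])                 # condition (ii): [M | Mx] must be invertible
assert abs(np.linalg.det(S)) > 1e-9
Pi = S @ np.diag([0.0, 1.0]) @ np.linalg.inv(S)   # projection along M onto Mx

I2, I1 = np.eye(2), np.eye(1)
W  = lambda l: I1 + C @ np.linalg.inv(l * I2 - A) @ B
Wm = lambda l: I1 + C @ np.linalg.inv(l * I2 - A) @ (I2 - Pi) @ B
Wp = lambda l: I1 + C @ Pi @ np.linalg.inv(l * I2 - A) @ B

for l in (0.7, -1.3, 2.5):             # points on the real line
    assert np.allclose(W(l), Wm(l) @ Wp(l))
```

The inverse formulas of the theorem, which involve A× instead of A, can be checked the same way.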

In [1] this theorem was obtained as a corollary of a more general theorem about minimal factorization. Consider a factorization W(λ) = W_1(λ)W_2(λ), where W_1 and W_2 are rational m × m matrix functions which are analytic at infinity and have the value I_m at infinity (i.e., both W_1 and W_2 can be represented in the form (3)). Then δ(W) ≤ δ(W_1) + δ(W_2), and the factorization W(λ) = W_1(λ)W_2(λ) is called minimal if δ(W) = δ(W_1) + δ(W_2).

THEOREM 2. Let W(λ) = I_m + C(λI_n − A)^{−1}B be a minimal realization, and let W(λ) = W_1(λ)W_2(λ) be a minimal factorization (with W_1 and W_2 analytic at infinity and W_1(∞) = W_2(∞) = I_m). Then there exists a unique pair M, M× of subspaces of ℂ^n such that

(i) AM ⊂ M, A×M× ⊂ M×,

(ii) ℂ^n = M ⊕ M×,

and

(4)   W_1(λ) = I_m + C(λI_n − A)^{−1}(I − Π)B,

(5)   W_2(λ) = I_m + CΠ(λI_n − A)^{−1}B,

where Π is the projection of ℂ^n along M onto M×. Conversely, if M and M× are subspaces of ℂ^n such that (i) and (ii) hold, then W(λ) = W_1(λ)W_2(λ) with W_1 and W_2 given by (4) and (5), and this factorization is minimal.
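The degree δ(W) that minimal factorization is measured against can be computed from any realization: it equals the rank of the product of the observability and controllability matrices, and it is insensitive to non-minimal padding of the state space. A small sketch (the example matrices are our own illustrative choices):

```python
import numpy as np

def mcmillan_degree(A, B, C):
    # delta(W) = rank(obsv @ ctrb): the smallest state dimension over all
    # realizations of W(l) = I + C(lI - A)^{-1} B.
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(obsv @ ctrb)

# A minimal 2-dimensional realization ...
A2 = np.array([[1j, 0], [0, -1j]])
B2 = np.array([[1.0], [2.0]], dtype=complex)
C2 = np.array([[3.0, 1.0]], dtype=complex)

# ... and the same W realized with an extra unreachable/unobservable state.
A3 = np.array([[1j, 0, 0], [0, -1j, 0], [0, 0, 5.0]])
B3 = np.array([[1.0], [2.0], [0.0]], dtype=complex)
C3 = np.array([[3.0, 1.0, 0.0]], dtype=complex)

assert mcmillan_degree(A2, B2, C2) == 2   # n = delta(W): realization is minimal
assert mcmillan_degree(A3, B3, C3) == 2   # padding does not change the degree
```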

Part I of the present book concerns a number of different further developments connected with Theorems 1 and 2. In what follows we shall briefly characterize each of the papers of this part. (We conclude the introduction with a few historical remarks.)

The first paper, "Left versus right canonical Wiener-Hopf factorization" by J.A. Ball and A.C.M. Ran, concerns connections between left (i.e., the order of the factors in (2) is interchanged) and right canonical factorization. Starting from a left canonical factorization with the factors given in realized form, the authors determine the conditions for the existence of a right canonical factorization and they construct explicitly the factors in terms of the data of the original left factorization. Both symmetric and nonsymmetric factorizations are discussed. Several applications are made.

The second paper, "Wiener-Hopf equations with symbols analytic in a strip" by H. Bart, I. Gohberg and M.A. Kaashoek, extends Theorem 1 to the case when W is the Fourier transform of an m × m matrix function k with entries in e^{ω|t|}L_1(−∞,∞), where ω is some negative constant. First, for such functions a realization is constructed. It turns out that in this case one has to replace the space ℂ^n in the realization by an infinite dimensional Banach space, and one has to allow the operators A and C to be unbounded. Proving the analogue of Theorem 1 for this class of matrix functions requires overcoming a number of difficulties connected with semigroup theory. Applications concern the Riemann boundary value problem and inversion and Fredholm properties of Wiener-Hopf integral operators.

The third paper, "On Toeplitz and Wiener-Hopf operators with contourwise rational matrix and operator symbols" by I. Gohberg, M.A. Kaashoek, L. Lerer and L. Rodman, generalizes Theorem 1 to the case when the domain of definition of the function W is not the real line but a curve consisting of several disjoint closed contours. On each of the contours the function W is represented in the form (3), but the operators A, B and C in the realizations depend on the contour. The paper contains applications to different classes of integral equations of convolution type.

The fourth paper, "Canonical pseudo-spectral factorization and Wiener-Hopf integral equations" by L. Roozemond, deals with Theorem 1 for the case when det W(·) has zeros on the real line. This case concerns realizations for which the operator A× = A − BC has real eigenvalues. An appropriate generalization of the factorization theorem is given. Applications are made to Wiener-Hopf integral equations of so-called non-normal type.

The last paper of the present part, "Minimal factorization of integral operators and cascade decompositions of systems" by I. Gohberg and M.A. Kaashoek, extends the concepts of canonical and minimal factorization in the direction of integral operators. Note that Theorems 1 and 2 can be expressed as factorization theorems for integral operators of the type I + V, where

((I + V)φ)(t) = φ(t) + ∫_0^t C e^{(t−s)A} B φ(s) ds,   0 ≤ t ≤ τ.

This observation has led the authors to far-reaching generalizations of Theorems 1 and 2 for different classes of integral operators. For example, in this way lower/upper factorization of integral operators appears as a special kind of minimal factorization. Also the connections with Mathematical Systems Theory, which formed one of the starting points for Theorems 1 and 2, are developed further here and concern now a cascade decomposition theory for time varying linear systems with well-posed boundary conditions.
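The kernel of the operator V above vanishes for s > t, so any grid discretization of I + V is a lower triangular matrix; this is the sense in which lower/upper factorization of integral operators fits the picture. A minimal numerical sketch (grid size, τ, and the matrices behind the kernel C e^{(t−s)A} B are illustrative choices of ours):

```python
import numpy as np

# Discretize ((I + V)phi)(t) = phi(t) + int_0^t C e^{(t-s)A} B phi(s) ds
# on a uniform grid; A is diagonal here, so the matrix exponential is elementwise.
a = np.array([-1.0, -2.0])          # diagonal of A
Bv = np.array([1.0, 1.0])           # B (a column) stored as a vector
Cv = np.array([0.5, 0.25])          # C (a row) stored as a vector

tau, N = 1.0, 50
t = np.linspace(0.0, tau, N)
h = t[1] - t[0]

K = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):          # kernel supported on s <= t only
        K[i, j] = Cv @ (np.exp((t[i] - t[j]) * a) * Bv)

T = np.eye(N) + h * K               # discretized I + V
assert np.allclose(T, np.tril(T))   # Volterra operator => lower triangular matrix
```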

Since there seems to be some confusion about references concerning Theorem 2 (see, e.g., [10], page 345 and [13], pages 13-16), we conclude this introduction with some historical remarks. Theorem 2 is due to H. Bart, I. Gohberg, M.A. Kaashoek and P. Van Dooren; it was produced at a mini-conference held at Amsterdam and Delft in February 1978 and published in [3].

A first predecessor of Theorem 2 is the famous Brodskii-Livsic division theorem for characteristic operator functions (see [4] and the references given there). In the theorem of Brodskii-Livsic the operator A − BC is just the adjoint A* of the main operator A, and the factors are also required to have this property. Consequently, in the Brodskii-Livsic theorem the space M is the important space and M× appears only implicitly as the orthogonal complement of M. Another predecessor of Theorem 2 connected with the theory of characteristic operator functions can be found in the work of V.M. Brodskii [6], where the operator A − BC appears as (A*)^{−1} (see also the review [5] by M.S. Brodskii).

A next predecessor of Theorem 2 is a factorization theorem due to L.A. Sahnovic. The main new feature in the Sahnovic theorem is the appearance of a second main operator and a second invariant subspace. Let us describe the result in more detail. Let W(·) and W(·)^{−1} be given in realized form:

(6)

These two realizations are assumed to be minimal, and hence there is an invertible operator S : ℂ^n → ℂ^n such that

Now suppose that there exists an orthogonal projection P of ℂ^n such that

and S_11 = PSP is invertible on Im P. Then the Sahnovic theorem states that T_22 = (I − P)S^{−1}(I − P) is invertible on Im(I − P) and W factors as W(λ) = W_1(λ)W_2(λ) with

where A_11 = PAP and A_22 = (I − P)A(I − P).

This factorization theorem was announced in [12] and its proof appeared recently in [13]. (In [12] and [13] also a weaker version appears, with the spaces ℂ^n and ℂ^m in the realizations (6) replaced by arbitrary Hilbert spaces.) In 1978 N.M. Kostenko [11] proved a theorem (which is reproduced in [13]) which showed that under certain extra conditions on the location of the zeros and the poles of the factors the conditions in the Sahnovic theorem are not only sufficient but also necessary for factorization.


Unfortunately, the proof in [11] (and also in [13]) contains a gap, and the Kostenko theorem is not correct (in fact, this theorem does not take into account the possibility of pole-zero cancellation between the factors).

As a distant relative preceding Theorem 2 we also mention the Gohberg-Lancaster-Rodman factorization theorem for monic matrix polynomials, which describes factorization in the class of monic matrix polynomials in terms of certain invariant subspaces of the companion matrix (see [9] and the references given there; also [2]).

REFERENCES

1. Bart, H., Gohberg, I. and Kaashoek, M.A.: Minimal factorization of matrix and operator functions, Operator Theory: Advances and Applications, Vol. 1, Birkhäuser Verlag, Basel etc., 1979.

2. Bart, H., Gohberg, I. and Kaashoek, M.A.: Operator polynomials as inverses of characteristic functions, Integral Equations and Operator Theory 1 (1978), 1-12.

3. Bart, H., Gohberg, I., Kaashoek, M.A. and Van Dooren, P.: Factorizations of transfer functions, SIAM J. Control Opt. 18 (6) (1980), 675-696.

4. Brodskii, M.S.: Triangular and Jordan representations of linear operators, Transl. Math. Monographs, Vol. 32, Amer. Math. Soc., Providence, R.I., 1970.

5. Brodskii, M.S.: Unitary operator colligations and their characteristic functions, Uspehi Mat. Nauk 33 (1978), 141-168 (Russian) = Russian Math. Surveys 33 (1978), 159-191.

6. Brodskii, V.M.: Some theorems on knots and their characteristic functions, Funktsional. Anal. i Prilozhen. 4 (1970), 95-96 (Russian) = Functional Analysis and Applications 4 (1970), 250-251.

7. Clancey, K. and Gohberg, I.: Factorization of matrix functions and singular integral operators, Operator Theory: Advances and Applications, Vol. 3, Birkhäuser Verlag, Basel etc., 1981.

8. Gohberg, I. and Krein, M.G.: Systems of integral equations on a half line with kernels depending on the difference of arguments, Uspehi Mat. Nauk 13 (1958), no. 2 (80), 3-72 (Russian) = Amer. Math. Soc. Transl. (2) 14 (1960), 217-287.

9. Gohberg, I., Lancaster, P. and Rodman, L.: Matrix polynomials, Academic Press, Inc., London, 1982.

10. Helton, J.W. and Ball, J.A.: The cascade decompositions of a given system vs the linear fractional decompositions of its transfer function, Integral Equations and Operator Theory 5 (1982), 341-385.

11. Kostenko, N.M.: A necessary and sufficient condition for factorization of a rational operator-function, Funktsional. Anal. i Prilozhen. 12 (1978), 87-88 (Russian) = Functional Analysis and Applications 12 (1978), 315-317.

12. Sahnovic, L.A.: On the factorization of an operator-valued transfer function, Dokl. Akad. Nauk SSSR 226 (1976), 781-784 (Russian) = Soviet Math. Dokl. 17 (1976), 203-207.

13. Sahnovic, L.A.: Problems of factorization and operator identities, Uspehi Mat. Nauk 41 (1986), 3-55.


Operator Theory: Advances and Applications, Vol. 21 ©1986 Birkhauser Verlag Basel

LEFT VERSUS RIGHT CANONICAL WIENER-HOPF

FACTORIZATION

Joseph A. Ball 1) and André C.M. Ran 2)

In this paper the existence of a right canonical Wiener-Hopf factorization for a rational matrix function is characterized in terms of a left canonical Wiener-Hopf factorization. Formulas for the factors in a right factorization are given in terms of the formulas for the factors in a given left factorization. Both symmetric and nonsymmetric factorizations are discussed.

1. INTRODUCTION

It is well known that Wiener-Hopf factorization of matrix and operator valued functions has wide applications in analysis and electrical engineering. Applications include convolution integral equations (see e.g. [GF]), singular integral equations (see e.g. [CG]), Toeplitz operators, the study of Riccati equations (see e.g. [W] and [H]) and, more recently, the model reduction for linear systems. The latter application, presented in detail by the authors in [BR 1,2] both for continuous time and discrete time systems, lies at the basis of the present paper. We mention that Glover [Gl] first solved the model reduction problem for continuous time systems without using Wiener-Hopf factorization.

The model reduction problem for discrete time systems as presented in [BR 2] leads to the following question concerning Wiener-Hopf factorization. We are given a p×q rational matrix function K(λ) = C(λI − A)^{−1}B with all its poles in the open unit disk, and a number σ > 0. Construct the function

1) The first author was partially supported by a grant from the National Science Foundation.

2) The second author was supported by a grant from the Niels Stensen Stichting at Amsterdam.


(1.1)

One needs a factorization of W(λ) of the form

(1.2)   W(λ) = X_+(1/λ̄)* diag(I_p, −I_q) X_+(λ),

where X_+ and its inverse are analytic on the disk. Note that (1.1) and (1.2) constitute symmetric left and right Wiener-Hopf factorizations of W(λ), respectively. This problem was solved in [BR 2] using the geometric factorization approach given in [BGK 1]. In this paper we shall discuss the following more general

problem. Given a left canonical Wiener-Hopf factorization

W(λ) = Y_+(λ) Y_−(λ)

with respect to a contour Γ, where Y_+ and Y_− are given in realization form, give necessary and sufficient conditions for the existence of a right canonical Wiener-Hopf factorization

W(λ) = X_−(λ) X_+(λ),

and provide formulas for X_− and X_+ in realization form. This is discussed in Section 2.

In Section 3 we give applications of the main result of Section 2 to the invertibility of singular integral operators. As is well known, the invertibility of the singular integral operator on L^p(Γ), where Γ is a contour in the complex plane ℂ, with symbol W is equivalent to the existence of a right canonical Wiener-Hopf factorization for W on Γ. Theorem 2.1 then gives necessary and sufficient conditions for the invertibility of the singular integral operator with symbol W under the assumption that the singular integral operator with symbol W^{−1} (or with symbol W^T) is invertible and the right canonical factorization of W^{−1} (respectively W^T) is known. We thank Kevin Clancey for pointing out to us this application of our main result.

In Sections 4 and 5 we indicate how symmetrized versions of the factorization problem can be handled also as a direct application of Theorem 2.1. Section 4 deals with the situation where the symmetry is with respect to the unit circle; this is the case which comes up for discrete time systems. In Section 5 the symmetry is with respect to the imaginary axis; this is germane to model reduction for continuous time systems. In this way we may view some factorization formulas from [BR 1,2] needed for the model reduction problem as essentially specializations of the more general formulas in Theorem 2.1.

2. LEFT AND RIGHT CANONICAL WIENER-HOPF FACTORIZATION

In this section we shall analyze the existence of a right canonical Wiener-Hopf factorization for a given rational matrix function in terms of a given left canonical Wiener-Hopf factorization. The factorizations are with respect to some fixed but general contour Γ in the complex plane. The analysis is built on the geometric approach to factorization given in [BGK 1]. We first establish some notation and terminology from [BGK 1].

By a Cauchy contour r we mean the positively oriented

boundary of a bounded Cauchy domain in the complex plane <C; such a

contour consists of finitely many nonintersecting closed rectifiable Jordan

curves. We denote by F_+ the interior domains of Γ and by F_− the

complement of its closure F̄_+ in the extended complex plane ℂ ∪ {∞}.

Now suppose that W is a rational m×m matrix function invertible


12 Ball and Ran

at ∞ and with no poles or zeros on the contour Γ. By a right

canonical (spectral) factorization of W with respect to Γ we mean a

factorization

(2.1)   W(λ) = W_−(λ) W_+(λ)   (λ ∈ Γ),

where W_− and W_+ are also rational m×m matrix functions such that W_−

has no poles or zeros on F̄_− (including ∞) and W_+ has no poles or zeros

on F̄_+. If the factors W_− and W_+ are interchanged in (2.1), we speak of

a left canonical (spectral) factorization.

We assume throughout that all matrix functions are analytic and

invertible at ∞; without loss of generality we may then assume that the

value at ∞ is the identity matrix I_m. By a realization for the matrix

function W we mean a representation for W of the form

W(λ) = I_m + C(λI_n − A)^{-1} B.

Here A is an n×n matrix (for some n) while C and B are m×n and n×m

matrices, respectively.

Left and right canonical factorizations are discussed at length in

the book [BGK 1]. There it is shown how to compute realizations for the

factors W_−, W_+ for a right canonical factorization W = W_−W_+ if one knows

a realization W(λ) = I_m + C(λI_n − A)^{-1}B for W. We shall suppose that we

know the factors Y_+ and Y_− of a left canonical factorization

W(λ) = Y_+(λ)Y_−(λ), say

(2.2)   Y_+(λ) = I_m + C_+(λI_{n_+} − A_+)^{-1} B_+

and

(2.3)   Y_−(λ) = I_m + C_−(λI_{n_−} − A_−)^{-1} B_−.



We then give a necessary and sufficient condition for a right canonical

factorization to exist, and in that case we compute the factors W_− and

W_+ of a right canonical factorization W(λ) = W_−(λ)W_+(λ) in terms of the

realizations of Y_+ and Y_−. The analysis is a straightforward application

of the geometric factorization principle in Chapter I of [BGK 1] (see also

[BGKvD]). The result is as follows.

THEOREM 2.1. Suppose that the rational m×m matrix function

W(λ) has a left canonical factorization W(λ) = Y_+(λ)Y_−(λ), where

Y_+(λ) = I_m + C_+(λI_{n_+} − A_+)^{-1} B_+   and   Y_−(λ) = I_m + C_−(λI_{n_−} − A_−)^{-1} B_−,

the matrices A_− and A_−^× := A_− − B_−C_− are n_−×n_− with spectra in F_+, and the

matrices A_+ and A_+^× := A_+ − B_+C_+ are n_+×n_+ with

spectra in F_−. Let P and Q denote the unique solutions of the Lyapunov

equations

(2.4)   A_−^× P − P A_+^× = B_− C_+,

(2.5)   A_+ Q − Q A_− = −B_+ C_−.

Then W has a right canonical factorization if and only if the n_+×n_+

matrix I_{n_+} − QP is invertible, or equivalently, if and only if the n_−×n_−

matrix I_{n_−} − PQ is invertible. When this is the case, the factors W_−(λ)

and W_+(λ) for a right canonical factorization W(λ) = W_−(λ)W_+(λ) are



given by the formulas

(2.6)   W_−(λ) = I + (C_+Q + C_−)(λI_{n_−} − A_−)^{-1}(I − PQ)^{-1}(−PB_+ + B_−)

and

(2.7)   W_+(λ) = I + (C_+ + C_−P)(I − QP)^{-1}(λI_{n_+} − A_+)^{-1}(B_+ − QB_−),

with inverses given by

(2.8)   W_−(λ)^{-1} = I − (C_+Q + C_−)(I − PQ)^{-1}(λI_{n_−} − A_−^×)^{-1}(−PB_+ + B_−)

and

(2.9)   W_+(λ)^{-1} = I − (C_+ + C_−P)(λI_{n_+} − A_+^×)^{-1}(I − QP)^{-1}(B_+ − QB_−).
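The criterion and the formulas of Theorem 2.1 are directly computable: (2.4) and (2.5) are Sylvester equations, and the factors are finite matrix expressions. The following sketch uses hypothetical example data (the contour Γ taken to be the unit circle, so F_+ is the open unit disk and F_− its exterior) and checks numerically that the factors (2.6)–(2.7) reproduce W = Y_+Y_−.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hypothetical data for a left factorization W = Y_+ Y_- on the unit circle:
# spectra of A_-, A_-^x inside the disk (F_+); spectra of A_+, A_+^x outside (F_-).
Ap = np.diag([2.0, 3.0]);  Bp = 0.1 * np.ones((2, 2));  Cp = 0.1 * np.eye(2)
Am = np.diag([0.2, 0.4]);  Bm = 0.1 * np.array([[1.0, 1.0], [1.0, -1.0]]);  Cm = 0.1 * np.eye(2)
Apx = Ap - Bp @ Cp                       # A_+^x
Amx = Am - Bm @ Cm                       # A_-^x
I2 = np.eye(2)

# (2.4)  A_-^x P - P A_+^x = B_- C_+   and   (2.5)  A_+ Q - Q A_- = -B_+ C_-
P = solve_sylvester(Amx, -Apx, Bm @ Cp)
Q = solve_sylvester(Ap, -Am, -Bp @ Cm)
assert np.linalg.cond(I2 - Q @ P) < 1e8  # right canonical factorization exists

lam = 1j                                 # a point on the contour
Yp = I2 + Cp @ np.linalg.solve(lam * I2 - Ap, Bp)
Ym = I2 + Cm @ np.linalg.solve(lam * I2 - Am, Bm)
# the factors (2.6) and (2.7)
Wm = I2 + (Cp @ Q + Cm) @ np.linalg.solve(lam * I2 - Am,
                                          np.linalg.solve(I2 - P @ Q, -P @ Bp + Bm))
Wp = I2 + (Cp + Cm @ P) @ np.linalg.solve(I2 - Q @ P,
                                          np.linalg.solve(lam * I2 - Ap, Bp - Q @ Bm))
print(np.allclose(Yp @ Ym, Wm @ Wp))     # True: W_- W_+ reproduces W
```

The inverse formulas (2.8)–(2.9) can be verified the same way at any point off the spectra.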

PROOF. From the realizations (2.2) and (2.3) for the functions

Y_+(λ) and Y_−(λ) we compute a realization for their product

W(λ) = Y_+(λ)Y_−(λ) as

W(λ) = I + C(λI − A)^{-1} B,

where (block rows separated by semicolons)

A = [ A_+  B_+C_− ; 0  A_− ],   B = [ B_+ ; B_− ],   C = [ C_+  C_− ]

(see p. 6 of [BGK 1]). The matrix A^× := A − BC equals

A^× = [ A_+^×  0 ; −B_−C_+  A_−^× ],

where A_+^× := A_+ − B_+C_+ and A_−^× := A_− − B_−C_−. Now by assumption the

spectrum σ(A_+) of A_+ is contained in F_−, while that of A_− is contained

in F_+. From the triangular form of A we see that

σ(A) = σ(A_+) ∪ σ(A_−) and that the spectral subspace for A associated

with F_− is the coordinate space Im [ I_{n_+} ; 0 ]. Now the spectral subspace M for A

corresponding to F_+ is determined by the fact that it must be

complementary to the spectral subspace Im [ I_{n_+} ; 0 ] for F_−, and that it must

be invariant for A. The first condition forces M to have the form

M = Im [ Q ; I_{n_−} ] for some n_+×n_− matrix Q (the "angle operator" for M). The

second condition (AM ⊂ M) requires that

A [ Q ; I_{n_−} ] = [ Q ; I_{n_−} ] X

for some n_−×n_− matrix X. From the second row in this identity we see

that X = A_− and then from the first row we see that

A_+ Q + B_+ C_− = Q A_−.

Thus the angle operator Q must be a solution of the Lyapunov equation

(2.5). By our assumption that the spectra of A_+ and A_− are disjoint, it

follows directly from the known theory of Lyapunov equations that there

is a unique solution Q. We have thus identified the spectral subspace M

of A for F_+ as M = Im [ Q ; I_{n_−} ], where Q is the unique solution of (2.5).

Since by assumption A_+^× has its spectrum in F_− while A_−^× has

its spectrum in F_+, the same analysis applies to A^×. We see that the

spectral subspace of A^× for F_+ is the coordinate space Im [ 0 ; I_{n_−} ], while the

spectral subspace M^× of A^× for F_− is the space

M^× = Im [ I_{n_+} ; P ],



where P is the unique solution of the Lyapunov equation (2.4). Again,

since the spectra of A_+^× and A_−^× are disjoint, we also see directly that

the solution P exists and is unique.

Now we apply Theorem 1.5 from [BGK 1]. One concludes that

the function W has a right canonical factorization W(λ) = W_−(λ)W_+(λ)

if and only if ℂ^{n_+ + n_−} = M ∔ M^×, that is, if and only if

ℂ^{n_+ + n_−} = Im [ Q ; I_{n_−} ] ∔ Im [ I_{n_+} ; P ].

(Here ∔ indicates a direct sum decomposition.)

One easily checks that this direct sum decomposition holds if

and only if the square matrix [ I_{n_+}  Q ; P  I_{n_−} ] is invertible. By standard

row and column operations this matrix can be diagonalized in either of two

ways:

[ I_{n_+}  Q ; P  I_{n_−} ] = [ I  Q ; 0  I ] [ I−QP  0 ; 0  I ] [ I  0 ; P  I ]

                            = [ I  0 ; P  I ] [ I  0 ; 0  I−PQ ] [ I  Q ; 0  I ].

Thus we see that the invertibility of [ I_{n_+}  Q ; P  I_{n_−} ] is equivalent to the

invertibility of I−QP and also to the invertibility of I−PQ.

Now suppose this condition holds. Let Π be the projection of

ℂ^{n_+ + n_−} onto M^× = Im [ I_{n_+} ; P ] along M = Im [ Q ; I_{n_−} ]. It is straightforward to

compute that

(2.10)   Π = [ I_{n_+} ; P ] (I−QP)^{-1} [ I_{n_+}  −Q ]

and that

I − Π = [ Q ; I_{n_−} ] (I−PQ)^{-1} [ −P  I_{n_−} ].

From Theorem 1.5 of [BGK 1] one obtains the formulas for the

right canonical spectral factors of W:

W_−(λ) = I + C(I−Π)(λI − A(I−Π))^{-1}(I−Π)B

and

W_+(λ) = I + CΠ(λI − ΠAΠ)^{-1}ΠB.

Let S : ℂ^{n_−} → Im(I−Π) be the operator S = [ Q ; I_{n_−} ], with inverse

S^{-1} = [ 0  I_{n_−} ]|_{Im(I−Π)}. Similarly, let T : ℂ^{n_+} → Im Π be the operator

T = [ I_{n_+} ; P ], with inverse T^{-1} = [ I_{n_+}  0 ]|_{Im Π}. The above formulas for W_− and W_+

may be rewritten as

(2.11)   W_−(λ) = I + CS(λI_{n_−} − S^{-1}AS)^{-1} S^{-1}(I−Π)B

and

(2.12)   W_+(λ) = I + CΠT(λI_{n_+} − T^{-1}ΠAΠT)^{-1} T^{-1}ΠB.

Now one computes from formulas (2.10) and the Lyapunov equations (2.4)

and (2.5) that

S^{-1}AS = [ 0  I_{n_−} ] A [ Q ; I_{n_−} ] = [ 0  I_{n_−} ] [ Q ; I_{n_−} ] A_− = A_−

as well as

CS = C_+Q + C_−

and

S^{-1}(I−Π)B = (I−PQ)^{-1}(−PB_+ + B_−).

Similarly we compute

T^{-1}ΠAΠT = (I−QP)^{-1} A_+ (I−QP)

as well as

CΠT = C_+ + C_−P

and

T^{-1}ΠB = (I−QP)^{-1}(B_+ − QB_−).

Substituting these expressions into formulas (2.11) and (2.12) yields the

expressions for W_−(λ) and W_+(λ) in the statement of the theorem. The

formulas for W_−(λ)^{-1} and W_+(λ)^{-1} follow immediately from these and the

general formula for the inverse of a transfer function

(2.13)   (I + C(λI − A)^{-1}B)^{-1} = I − C(λI − (A − BC))^{-1}B

(see [BGK 1], p. 7) once the associate operators

A_−^⊗ := A_− − (I−PQ)^{-1}(−PB_+ + B_−)(C_+Q + C_−)

and

A_+^⊗ := A_+ − (B_+ − QB_−)(C_+ + C_−P)(I−QP)^{-1}

are computed. Again use the Lyapunov equations to deduce that

(−PB_+ + B_−)(C_+Q + C_−)

= −PB_+C_+Q + B_−C_+Q − PB_+C_− + B_−C_−



and thus

(−PB_+ + B_−)(C_+Q + C_−)

= −PB_+C_+Q + (A_− − B_−C_−)PQ − P(A_+ − B_+C_+)Q − PQA_− + PA_+Q + B_−C_−

= A_−PQ − PQA_− + B_−C_−(I−PQ).

Hence

A_−^⊗ = (I−PQ)^{-1}[(I−PQ)A_− − A_−PQ + PQA_− − B_−C_−(I−PQ)]

= (I−PQ)^{-1} A_−^× (I−PQ).

A completely analogous computation gives

A_+^⊗ = (I−QP)(A_+ − B_+C_+)(I−QP)^{-1}

= (I−QP) A_+^× (I−QP)^{-1}.


Now apply formula (2.13) to the representations for W_−(λ) and W_+(λ)

in the theorem together with the above expressions for A_−^⊗ and A_+^⊗ to

derive the desired expressions for W_−(λ)^{-1} and W_+(λ)^{-1}.

REMARK. Theorem 2.1 actually holds in greater generality than

that stated here. Specifically, the matrix functions Y_− and Y_+ may be

irrational as long as they have (possibly infinite dimensional) realizations as

in the Theorem.

3. APPLICATION TO SINGULAR INTEGRAL OPERATORS

For Γ a contour as above, introduce the operator of singular

integration S_Γ : L_2^m(Γ) → L_2^m(Γ) on Γ by

(S_Γ φ)(λ) = (1/πi) ∫_Γ φ(τ)/(τ − λ) dτ,

where integration over Γ is in the Cauchy principal value sense. Introduce

P_Γ = ½(I + S_Γ), Q_Γ = ½(I − S_Γ); then P_Γ and Q_Γ are projections on

L_2^m(Γ). We consider the singular integral operator S : L_2^m(Γ) → L_2^m(Γ),



(3.1)   (Sφ)(λ) = A(λ)(P_Γ φ)(λ) + B(λ)(Q_Γ φ)(λ),

where A(λ) and B(λ) are rational matrix functions with poles and zeros

off Γ. The symbol of S is the function W(λ) = B(λ)^{-1}A(λ). It is

well known (see e.g. [CG], [GK]) that S is invertible if and only if W(λ)

admits a right canonical factorization

(3.2)   W(λ) = W_−(λ) W_+(λ),

in which case

(3.3)   (S^{-1}φ)(λ) = W_+^{-1}(λ)(P_Γ W_−^{-1}B^{-1}φ)(λ) + W_−(λ)(Q_Γ W_−^{-1}B^{-1}φ)(λ).

Theorem 2.1 can be used to study the invertibility of S in terms of the

invertibility of either one of the following operators:

(S_1φ)(λ) = B(λ)(P_Γφ)(λ) + A(λ)(Q_Γφ)(λ),

(S_2φ)(λ) = {B(λ)^{-1}}^T(P_Γφ)(λ) + {A(λ)^{-1}}^T(Q_Γφ)(λ).

Note that the symbol of S_1 is W(λ)^{-1} and the symbol of S_2 is W(λ)^T.

More precisely we have the following theorems, the proofs of which are

immediate by combining the above remarks with Theorem 2.1.

THEOREM 3.1. Assume that S_1 is invertible and let the right

canonical factorization of the symbol of S_1 be given by

W(λ)^{-1} = Y_−(λ)^{-1} Y_+(λ)^{-1},

Y_−(λ)^{-1} = I_m − C_−(λI − A_−^×)^{-1} B_−,

Y_+(λ)^{-1} = I_m − C_+(λI − A_+^×)^{-1} B_+.

Let P and Q denote the unique



solutions of the Lyapunov equations (2.4) and (2.5), respectively. Then S

is invertible if and only if I−QP is invertible, or equivalently, if and only

if I−PQ is invertible.

THEOREM 3.2. Assume that S_2 is invertible and let the right

canonical factorization of the symbol of S_2 be given by

W(λ)^T = {B(λ)^{-1}A(λ)}^T = Y_−(λ)^T Y_+(λ)^T,

Y_−(λ)^T = I_m + B_−^T(λI − A_−^T)^{-1} C_−^T,

Y_+(λ)^T = I_m + B_+^T(λI − A_+^T)^{-1} C_+^T.

Set A_−^× = A_− − B_−C_− and A_+^× = A_+ − B_+C_+. Let P and Q denote the unique

solutions of the Lyapunov equations (2.4) and (2.5), respectively. Then S

is invertible if and only if I−QP is invertible, or equivalently, if and only

if I−PQ is invertible.

In both cases the formulas for the factors W_−(λ), W_+(λ) in

the factorization (3.2) of the symbol of S, as well as the formulas for

their inverses, are given by (2.6)-(2.9). Then (3.3) gives an explicit

formula for S^{-1}.
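On the unit circle the projections P_Γ and Q_Γ act on Fourier series by keeping the nonnegative- and negative-index coefficients, respectively, so (3.1) and the inversion formula (3.3) can be tried out numerically. A sketch with a hypothetical scalar symbol (take B ≡ 1, so W = A) whose right canonical factorization is known by inspection:

```python
import numpy as np

N = 256
t = 2 * np.pi * np.arange(N) / N
lam = np.exp(1j * t)                        # grid on the unit circle Gamma

def P(f):                                   # P_Gamma: keep Fourier indices n >= 0
    c = np.fft.fft(f)
    c[N // 2:] = 0
    return np.fft.ifft(c)

def Q(f):                                   # Q_Gamma = I - P_Gamma
    return f - P(f)

# Hypothetical symbol: W_- has its pole and zero inside the circle, W_+ outside.
Wm = (lam - 0.3) / (lam - 0.5)              # W_-(lambda)
Wp = (lam - 2.0) / (lam - 3.0)              # W_+(lambda)
A = Wm * Wp                                 # A(lambda) = W(lambda), with B = 1

def S(phi):                                 # the operator (3.1) with B = 1
    return A * P(phi) + Q(phi)

def S_inv(phi):                             # the inversion formula (3.3)
    psi = phi / Wm                          # W_-^{-1} B^{-1} phi
    return P(psi) / Wp + Wm * Q(psi)

phi = np.exp(np.cos(t)) + lam ** 3          # a smooth test function
print(np.max(np.abs(S_inv(S(phi)) - phi)))  # tiny (near machine precision)
```

The residual is at roundoff level because the symbols' Fourier coefficients decay geometrically, so truncation at N = 256 modes is far below machine precision.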

The two theorems above can of course be reformulated

completely in terms of S and its symbol W(λ). Actually, if W(λ) admits a

left canonical factorization W(λ) = Y_+(λ)Y_−(λ) with factors Y_+, Y_− as in

(2.2), (2.3), then invertibility of S is equivalent to invertibility of I−PQ,

where P and Q are the unique solutions of (2.4) and (2.5), respectively.

In fact, in terms of [BGK 2], I−PQ is an indicator for the operator S, as

well as for the Toeplitz operator with symbol W. Indeed, according to



[BGK 2], Theorem III.2.2, an indicator for S is given by the operator

P^×|_{Im P} : Im P → Im P^×, where P (resp. P^×) is the spectral projection of

A (resp. A^×) corresponding to F_+ (here A, A^× come from a realization of W).

From the proof of Theorem 2.1 one sees easily that Im P = Im [ Q ; I_{n_−} ] and

that P^× projects onto Im [ 0 ; I_{n_−} ] along Im [ I_{n_+} ; P ]. Hence P^×|_{Im P} is actually given by I−PQ.

4. SPECTRAL AND ANTISPECTRAL FACTORIZATION ON THE

UNIT CIRCLE

Suppose that W(λ) is a rational m×m matrix function analytic

and invertible on the unit circle {|λ| = 1} such that W(1/λ̄)^* = W(λ).

For convenience, in this section we shall use W^*

to designate the function W^*(λ) = W(1/λ̄)^*. Note that W = W^* for a rational

matrix function W if and only if W(λ) is self-adjoint for |λ| = 1.

Since W(λ) by assumption is also invertible on the unit circle, W(e^{iτ})

must have a constant number (say p) of positive eigenvalues and q = m−p

negative eigenvalues for all real τ. By a signed antispectral

factorization of W (with respect to the unit circle) we mean a factorization

of the form

W(λ) = Y_−^*(λ) [ I_p  0 ; 0  −I_q ] Y_−(λ),

where Y_−(λ) is analytic and invertible on the exterior of the unit disk



D̄_e = {|λ| ≥ 1}. By a signed spectral factorization of W (with respect

to the unit circle) we mean a factorization of the form

W(λ) = X_+^*(λ) [ I_p  0 ; 0  −I_q ] X_+(λ),

where X_+(λ) is analytic and invertible on the closed unit disk

D̄ = {|λ| ≤ 1}. The problem which we wish to analyze in this section

is a symmetrized version of that considered in Section 2: namely, given a

signed antispectral factorization W(λ) = Y_−^*(λ) [ I_p  0 ; 0  −I_q ] Y_−(λ), give

necessary and sufficient conditions for the existence of a signed spectral

factorization, and, for the case where these are satisfied, give an explicit

formula for a spectral factor X +(A).

We first remark that a function W = W^* (invertible on the unit

circle) has a signed spectral factorization if and only if it has a canonical

right Wiener-Hopf factorization with respect to the unit circle. Indeed, if

W(λ) = X_+^*(λ) [ I_p  0 ; 0  −I_q ] X_+(λ) is a signed spectral factorization, then

W(λ) = W_−(λ)W_+(λ), where W_−(λ) := X_+^*(λ) [ I_p  0 ; 0  −I_q ] and W_+(λ) := X_+(λ), is

a right canonical factorization as discussed in Section 2 (with the contour Γ chosen to be the unit circle {|λ| = 1}). Note that here we do not insist

on normalizing the value at infinity to be I_m. Conversely, suppose

W(λ) = W_−(λ)W_+(λ) is a right canonical factorization with respect to the

unit circle. Then W(λ) = W^*(λ) = W_+^*(λ)W_−^*(λ) is another. But it is

known that such (nonnormalized) factorizations are unique up to a constant



invertible factor; thus W_−^*(λ) = cW_+(λ) for some nonsingular m×m matrix c,

and W(λ) = W_+^*(λ) c W_+(λ). Plugging in λ = 1 we see that

c = W_+(1)^{*-1}W(1)W_+(1)^{-1} is self-adjoint with p positive and q negative

eigenvalues. We then may factor c as c = d^* [ I_p  0 ; 0  −I_q ] d, and then

W(λ) = X_+^*(λ) [ I_p  0 ; 0  −I_q ] X_+(λ)

is a signed spectral factorization, where X_+(λ) = dW_+(λ).
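The signature factorization c = d^* [ I_p  0 ; 0  −I_q ] d used here is easy to produce from an eigendecomposition. A sketch with a hypothetical 2×2 Hermitian c of signature (1, 1); the matrix d is far from unique:

```python
import numpy as np

c = np.array([[2.0, 1.0],
              [1.0, -1.0]])              # Hermitian; one positive, one negative eigenvalue
w, U = np.linalg.eigh(c)                 # c = U diag(w) U^*
order = np.argsort(-w)                   # positive eigenvalues first
w, U = w[order], U[:, order]
d = np.diag(np.sqrt(np.abs(w))) @ U.T    # one admissible d
J = np.diag(np.sign(w))                  # J = diag(I_p, -I_q)
print(np.allclose(d.T @ J @ d, c), np.diag(J))
```

Any invertible left multiplication of d by a J-unitary matrix yields another admissible factor, which is exactly the nonuniqueness absorbed into X_+(λ) = dW_+(λ).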

It remains only to use this connection and the work of Section

2 on Wiener-Hopf factorization to get an analogous result for signed

spectral factorization.

THEOREM 4.1. Suppose that the rational m×m matrix function

W(λ) = W^*(λ) has a signed antispectral factorization

W(λ) = Y_−^*(λ) [ I_p  0 ; 0  −I_q ] Y_−(λ),   Y_−(λ) = Y_−(∞)[I_m + C_−(λI − A_−)^{-1}B_−],

where A_− and A_−^× := A_− − B_−C_− have their spectra in the

open unit disk D. We also assume that Y_−(∞) and Y_−^*(∞) = Y_−(0)^* are

invertible, i.e., W(∞) and W(0) = W(∞)^* are invertible. We denote by Ψ the

Hermitian matrix

Ψ = Y_−(∞)^* [ I_p  0 ; 0  −I_q ] Y_−(∞).

Let P and Q denote the unique solutions of the Lyapunov equations

(4.1)   A_−^× P (A_−^×)^* − P = −B_− Ψ^{-1} B_−^*,

(4.2)   A_−^* Q A_− − Q = C_−^* Ψ C_−.

Then W has a signed spectral factorization if and only if the matrix I−QP is

invertible.

Suppose now that this is the case, so I−QP is invertible. Let

Z = (I−QP)^{-1}. Then the Hermitian matrix

(4.3)   c = Ψ − ΨC_−A_−^{-1}Z^*B_− − B_−^*ZA_−^{*-1}C_−^*Ψ + ΨC_−A_−^{-1}Z^*PA_−^{*-1}C_−^*Ψ + B_−^*ZQB_−

is invertible and has p positive and q negative eigenvalues. Thus c has a

factorization

c = d^* [ I_p  0 ; 0  −I_q ] d

for some invertible matrix d. Then

W(λ) = X_+^*(λ) [ I_p  0 ; 0  −I_q ] X_+(λ)

is a signed spectral factorization of W(λ), where

(4.4)   X_+(λ) = d{I + (−Ψ^{-1}B_−^*(A_−^×)^{*-1} + C_−P) Z (λI − A_−^{*-1})^{-1}(A_−^{*-1}C_−^*Ψ − QB_−)}

with

(4.5)   X_+(λ)^{-1} = {I − (−Ψ^{-1}B_−^*(A_−^×)^{*-1} + C_−P)(λI − (A_−^×)^{*-1})^{-1} Z (A_−^{*-1}C_−^*Ψ − QB_−)} d^{-1}.

PROOF. In Theorem 2.1 it was assumed that W(∞) = I_m and

that W(λ) = Y_+(λ)Y_−(λ), where Y_+(∞) = Y_−(∞) = I_m. We thus consider

here the matrix function W(∞)^{-1}W(λ) and its left canonical factorization

(with respect to the unit circle)

W(∞)^{-1}W(λ) = Y_+(λ)·[Y_−(∞)^{-1}Y_−(λ)],

where

Y_+(λ) = W(∞)^{-1} Y_−^*(λ) [ I_p  0 ; 0  −I_q ] Y_−(∞).

By assumption

Y_−(∞)^{-1}Y_−(λ) = I_m + C_−(λI − A_−)^{-1}B_−,

where A_−, B_−, C_− are given. Note that then

Y_−^*(λ) = [I_m − B_−^*A_−^{*-1}C_−^* − B_−^*A_−^{*-1}(λI − A_−^{*-1})^{-1}A_−^{*-1}C_−^*] Y_−(∞)^*

and thus

W(∞) = Y_−^*(∞) [ I_p  0 ; 0  −I_q ] Y_−(∞) = (I_m − B_−^*A_−^{*-1}C_−^*)Ψ.

Thus Y_+(λ) has the form

Y_+(λ) = Ψ^{-1}(I_m − B_−^*A_−^{*-1}C_−^*)^{-1}[I_m − B_−^*A_−^{*-1}C_−^* − B_−^*A_−^{*-1}(λI − A_−^{*-1})^{-1}A_−^{*-1}C_−^*]Ψ

= I_m − Ψ^{-1}(I_m − B_−^*A_−^{*-1}C_−^*)^{-1}B_−^*A_−^{*-1}(λI − A_−^{*-1})^{-1}A_−^{*-1}C_−^*Ψ.

Certainly, W(∞)^{-1}W(λ) has a right canonical factorization if and only if

W(λ) does, and by the remarks above, this in turn is equivalent to the

existence of a signed spectral factorization for W. To get conditions for a

right canonical factorization for W(∞)^{-1}W(λ), we apply Theorem 2.1 with

A_−, B_−, C_− as given here, but with A_+, B_+, C_+ given by

A_+ = A_−^{*-1},   B_+ = A_−^{*-1}C_−^*Ψ

and

C_+ = −Ψ^{-1}(I_m − B_−^*A_−^{*-1}C_−^*)^{-1}B_−^*A_−^{*-1} = −Ψ^{-1}B_−^*(A_−^×)^{*-1},

so that Y_+(λ) = I_m + C_+(λI − A_+)^{-1}B_+.

We next compute

A_+^× := A_+ − B_+C_+ = A_−^{*-1} + A_−^{*-1}C_−^*B_−^*(A_−^×)^{*-1}

= A_−^{*-1}[(A_−^×)^* + C_−^*B_−^*](A_−^×)^{*-1} = (A_−^×)^{*-1}.

Thus the Lyapunov equation (2.4) for this setting becomes

A_−^×P − P(A_−^×)^{*-1} = −B_−Ψ^{-1}B_−^*(A_−^×)^{*-1},

which we prefer to write in the equivalent form (4.1). Similarly the

Lyapunov equation (2.5) becomes upon substituting the above expressions

for A+ and B+

A_−^{*-1}Q − QA_− = −A_−^{*-1}C_−^*ΨC_−,

which is equivalent to (4.2). Thus the invertibility of I−QP, where P and

Q are the unique solutions of the Lyapunov equations (4.1) and (4.2), is a

necessary and sufficient condition for the existence of a signed spectral

factorization of W(λ). Note that P^* is a solution of (4.1) whenever P is.

By our assumptions on the spectrum of A_−^×, the solution of (4.1) is unique,

and hence P^* = P. Similarly Q^* = Q for the solution Q of (4.2).

Now suppose I−QP is invertible and set Z = (I−QP)^{-1}. In

computations to follow, we shall use that

Z^* = (I−PQ)^{-1},   PZ = Z^*P,   ZQ = QZ^*.

By the formulas (2.6)-(2.9) in Theorem 2.1, we see that W(∞)^{-1}W(λ)

has the right canonical factorization W(∞)^{-1}W(λ) = W_−(λ)W_+(λ), where

(4.6)   W_+(λ) = I + (−Ψ^{-1}B_−^*(A_−^×)^{*-1} + C_−P) Z (λI − A_−^{*-1})^{-1}(A_−^{*-1}C_−^*Ψ − QB_−)

and

(4.7)   W_+(λ)^{-1} = I − (−Ψ^{-1}B_−^*(A_−^×)^{*-1} + C_−P)(λI − (A_−^×)^{*-1})^{-1} Z (A_−^{*-1}C_−^*Ψ − QB_−).

In particular W(λ) = W(∞)W_−(λ)·W_+(λ) is a right canonical factorization of



W, as is also W(λ) = W^*(λ) = W_+^*(λ)·[W_−^*(λ)W(∞)]. By the uniqueness

of the right canonical factorization, we know that there is a (constant)

invertible matrix c such that W(∞)W_−(λ) = W_+^*(λ)c. Thus

(4.8)   W(λ) = W_+^*(λ) c W_+(λ).

By evaluating both sides of (4.8) at a point λ on the unit circle and

using the original signed antispectral factored form for W, we see that c is

invertible with p positive and q negative eigenvalues. Thus c can be

factored as c = d^* [ I_p  0 ; 0  −I_q ] d for an invertible matrix d. Then (4.8)

becomes

W(λ) = W_+^*(λ) d^* [ I_p  0 ; 0  −I_q ] d W_+(λ),

a signed spectral factorization of W(λ), where X_+(λ) = dW_+(λ). Using

formulas (4.6) and (4.7), we get the desired formulas (4.4) and (4.5) for

X_+(λ) and X_+(λ)^{-1} once we verify that the constant c in (4.8) is given

by formula (4.3).

To evaluate c, we set λ = ∞ in (4.8) to get

c = W_+^*(∞)^{-1}W(∞) = W_+(0)^{*-1}W(∞).

From (4.7) we see that

W_+(0)^{*-1} = (W_+(0)^{-1})^* = I + (ΨC_−A_−^{-1} − B_−^*Q) Z^* A_−^× (−(A_−^×)^{-1}B_−Ψ^{-1} + PC_−^*),



while we have already observed that W(∞) = (I_m − B_−^*A_−^{*-1}C_−^*)Ψ. To

compute the product c = W_+(0)^{*-1}W(∞), we first simplify the expression

(−(A_−^×)^{-1}B_−Ψ^{-1} + PC_−^*)W(∞) as follows:

(−(A_−^×)^{-1}B_−Ψ^{-1} + PC_−^*)(Ψ − B_−^*A_−^{*-1}C_−^*Ψ)

= −(A_−^×)^{-1}B_− + (A_−^×)^{-1}B_−Ψ^{-1}B_−^*A_−^{*-1}C_−^*Ψ + PC_−^*Ψ − PC_−^*B_−^*A_−^{*-1}C_−^*Ψ

(from the Lyapunov equation (4.1))

= −(A_−^×)^{-1}B_− + PC_−^*Ψ + (A_−^×)^{-1}PA_−^{*-1}C_−^*Ψ

  − P(A_−^* − C_−^*B_−^*)A_−^{*-1}C_−^*Ψ − PC_−^*B_−^*A_−^{*-1}C_−^*Ψ

= (A_−^×)^{-1}[−B_− + PA_−^{*-1}C_−^*Ψ].

Thus

c = Ψ − B_−^*A_−^{*-1}C_−^*Ψ + (ΨC_−A_−^{-1} − B_−^*Q)Z^*[−B_− + PA_−^{*-1}C_−^*Ψ]

= Ψ − B_−^*A_−^{*-1}C_−^*Ψ − ΨC_−A_−^{-1}Z^*B_− + ΨC_−A_−^{-1}Z^*PA_−^{*-1}C_−^*Ψ

  + B_−^*ZQB_− − B_−^*QZ^*PA_−^{*-1}C_−^*Ψ.

Now use that QZ^*P = ZQP = −I + Z to get



c = Ψ − ΨC_−A_−^{-1}Z^*B_− − B_−^*ZA_−^{*-1}C_−^*Ψ

  + ΨC_−A_−^{-1}Z^*PA_−^{*-1}C_−^*Ψ + B_−^*ZQB_−,

which agrees with (4.3). This completes the proof of Theorem 4.1.

The model reduction problem for discrete time systems from [BR 2]

involves the application of Theorem 4.1 to a function Y_−(λ) of a

special form.

COROLLARY 4.2. Suppose K(λ) = C(λI − A)^{-1}B is a p×q rational

matrix function of McMillan degree n such that all poles of K are in the

open unit disk D. Thus we may assume that σ(A) ⊂ D. For σ a

positive real number, define the matrix function W(λ) by

W(λ) = [ I_p  0 ; K^*(λ)  σI_q ] [ I_p  K(λ) ; 0  −σI_q ]

and let P and Q be the unique solutions of the Lyapunov equations

(4.9)   A(σ²P)A^* − (σ²P) = BB^*,

(4.10)   A^*QA − Q = C^*C.

Then W()") has ~ signed spectral factorization if and only if the matrix

I-QP is invertible.

When this is the case, the factor X+()") for ~ signed spectral



factorization W(λ) = X_+^*(λ) [ I_p  0 ; 0  −I_q ] X_+(λ) is computed as follows. Set

Z = (I−QP)^{-1} and let c be the (p+q)×(p+q) matrix

(4.11)   c = [ I_p + CA^{-1}Z^*PA^{*-1}C^*   −CA^{-1}Z^*B ; −B^*ZA^{*-1}C^*   −σ²I_q + B^*ZQB ].

Then c is Hermitian with p positive and q negative eigenvalues, and has

a factorization

(4.12)   c = d^* [ I_p  0 ; 0  −I_q ] d

for an invertible (p+q)×(p+q) matrix d. Then the spectral factor X_+(λ)

for W(λ) in this case is given by

(4.13)   X_+(λ) = d{ I_{p+q} + [ CP ; σ^{-2}B^*A^{*-1} ] Z (λI − A^{*-1})^{-1} [ A^{*-1}C^*  −QB ] }

with inverse given by

(4.14)   X_+(λ)^{-1} = { I_{p+q} − [ CP ; σ^{-2}B^*A^{*-1} ] (λI − A^{*-1})^{-1} Z [ A^{*-1}C^*  −QB ] } d^{-1}.

PROOF. The result follows immediately from Theorem 4.1 upon

taking

Y_−(λ) = [ I_p  0 ; 0  σI_q ] [ I_p  K(λ) ; 0  I_q ] = [ I_p  0 ; 0  σI_q ] { I_{p+q} + [ C ; 0 ](λI − A)^{-1}[ 0  B ] }.

Note that both Y_−(λ) and

Y_−(λ)^{-1} = [ I_p  −K(λ) ; 0  I_q ] [ I_p  0 ; 0  σ^{-1}I_q ]

are analytic in the complement of the unit disk D (including ∞) since all

poles of K(λ) are assumed to be in D.
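The Stein equations (4.9)–(4.10) are solved by standard discrete Lyapunov solvers; their solutions are (up to sign and the σ² scaling) the controllability and observability Gramians of K, so the invertibility test on I−QP amounts to σ not being a Hankel singular value. A sketch with hypothetical data:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stable K(lambda) = C (lambda I - A)^{-1} B, sigma(A) in the open unit disk
A = np.diag([0.5, -0.3])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
sigma = 0.7

# (4.9):  A (sigma^2 P) A^* - (sigma^2 P) = B B^*   -> sigma^2 P = -(controllability Gramian)
# (4.10): A^* Q A - Q = C^* C                        -> Q = -(observability Gramian)
sig2P = solve_discrete_lyapunov(A, -B @ B.T)
Q = solve_discrete_lyapunov(A.T, -C.T @ C)
P = sig2P / sigma**2

assert np.allclose(A @ sig2P @ A.T - sig2P, B @ B.T)
assert np.allclose(A.T @ Q @ A - Q, C.T @ C)

# I - QP is singular exactly when sigma is a Hankel singular value of K
hankel_sv = np.sqrt(np.sort(np.linalg.eigvals((-Q) @ (-sig2P)).real))
print(hankel_sv, abs(np.linalg.det(np.eye(2) - Q @ P)) > 1e-9)
```

Note the sign convention: `scipy.linalg.solve_discrete_lyapunov(a, q)` solves a x a^H − x + q = 0, hence the minus signs on the right-hand sides above.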

5. SYMMETRIZED LEFT AND RIGHT CANONICAL SPECTRAL

FACTORIZATION ON THE IMAGINARY AXIS

Suppose that W(λ) is a rational m×m matrix function analytic

and invertible on the iω-axis (including ∞) which enjoys the additional

symmetry property W(−λ̄)^* = W(λ). For convenience we shall denote

W(−λ̄)^* by W^*(λ) in the sequel in this section. Thus on the iω-axis

W(λ) is Hermitian. Since W(λ) is also invertible on the iω-axis, W(iω)

must have a constant number (say p) of positive eigenvalues and q = m−p

negative eigenvalues for all real ω. By a left spectral factorization of W

(with respect to the imaginary axis) we mean a factorization of the form

W(λ) = Y_+^*(λ) [ I_p  0 ; 0  −I_q ] Y_+(λ),

where Y_+(λ) is analytic and invertible on the closed right half plane

{Reλ ≥ 0}. By a right spectral factorization of W (with respect to the

iω-axis) we mean a factorization of the form

W(λ) = X_−^*(λ) [ I_p  0 ; 0  −I_q ] X_−(λ),

where X_−(λ) is analytic and invertible on the closed left half plane

{Reλ ≤ 0}. The problem we wish to analyze in this section is the half

plane version of that considered in the previous section: namely, given a

left spectral factorization W(λ) = Y_+^*(λ) [ I_p  0 ; 0  −I_q ] Y_+(λ), compute a

right spectral factorization W(λ) = X_−^*(λ) [ I_p  0 ; 0  −I_q ] X_−(λ). The

result is the following.



THEOREM 5.1. Suppose the rational m×m matrix function

W(λ) = W^*(λ) has a left spectral factorization

W(λ) = Y_+^*(λ) [ I_p  0 ; 0  −I_q ] Y_+(λ),   Y_+(λ) = Y_+(∞)[I_m + C_+(λI − A_+)^{-1}B_+].

We may assume that A_+ and A_+^× := A_+ − B_+C_+ have their spectra in the

open left half plane {Reλ < 0}. Let P and Q denote the unique solutions

of the Lyapunov equations

(5.1)   A_+^× P + P (A_+^×)^* = −B_+ W(∞)^{-1} B_+^*,

(5.2)   A_+^* Q + Q A_+ = C_+^* W(∞) C_+.

Then W has a right spectral factorization if and only if the matrix I−QP

is invertible, or equivalently, if and only if the matrix I−PQ is invertible.

When this is the case, the factor X_−(λ) for a right spectral factorization

may be taken to be

X_−(λ) = Y_+(∞){I + (−W(∞)^{-1}B_+^* + C_+P)(I−QP)^{-1}(λI + A_+^*)^{-1}(C_+^*W(∞) − QB_+)}

with inverse

X_−(λ)^{-1} = {I − (−W(∞)^{-1}B_+^* + C_+P)(λI + A_+^* − C_+^*B_+^*)^{-1}(I−QP)^{-1}(C_+^*W(∞) − QB_+)}Y_+(∞)^{-1}.



PROOF. In Theorem 2.1 it was assumed that W(∞) = I_m and

that W(λ) = Y_+(λ)Y_−(λ), where Y_+(∞) = Y_−(∞) = I_m. We thus consider

here W(∞)^{-1}W(λ) and its left Wiener-Hopf factorization

W(∞)^{-1}W(λ) = [W(∞)^{-1}Y_+^*(λ) ( I_p  0 ; 0  −I_q ) Y_+(∞)]·[Y_+(∞)^{-1}Y_+(λ)],

where

Y_+(∞)^{-1}Y_+(λ) = I_m + C_+(λI − A_+)^{-1}B_+.

From

Y_+^*(λ) = [I_m − B_+^*(λI + A_+^*)^{-1}C_+^*] Y_+(∞)^*

we get

W(∞)^{-1}Y_+^*(λ) ( I_p  0 ; 0  −I_q ) Y_+(∞) = I_m − W(∞)^{-1}B_+^*(λI + A_+^*)^{-1}C_+^*W(∞).

We thus define

A_− = −A_+^*,   B_− = C_+^*W(∞),   C_− = −W(∞)^{-1}B_+^*,

and apply the results of Theorem 2.1 with the roles of + and −

interchanged. The Lyapunov equations (2.4) and (2.5) specialize to (5.1)

and (5.2). Thus W(∞)^{-1}W(λ) has a right Wiener-Hopf factorization if and

only if I−QP is invertible, where P and Q are the solutions of (5.1) and

(5.2). When this is the case then W(∞)^{-1}W(λ) = W_+(λ)W_−(λ), where

W_+ and W_− can be computed as in Theorem 2.1 (interchanging + and −),

and where W_+(∞) = W_−(∞) = I_m.

One easily sees that X_−(λ) :=



Y_+(∞)W_−(λ) is the factor for a right spectral factorization for W(λ).

This choice of X_−(λ) then produces the formulas in Theorem 5.1.

For the application to the model reduction problem for

continuous time systems (see [BR 1] and [Gl]), one needs to apply Theorem

5.1 to a function Y_+(λ) having a special form.

COROLLARY 5.2. Suppose G(λ) = C(λI − A)^{-1}B is a stable

rational p×q matrix function of McMillan degree n. Thus we may assume

that the spectrum of the n×n matrix A is in the open left half plane

{Reλ < 0}. For σ a positive real number, let W(λ) be defined by

W(λ) = [ I_p  0 ; G^*(λ)  σI_q ] [ I_p  G(λ) ; 0  −σI_q ]

and let the n×n matrices P and Q be the unique solutions of the

Lyapunov equations

(5.3)   A(σ²P) + (σ²P)A^* = BB^*

and

(5.4)   A^*Q + QA = C^*C.

Then W(λ) has a right spectral factorization if and only if the matrix

I−QP (or equivalently I−PQ) is invertible. When this is the case, the

factor X_−(λ) for a symmetrized right canonical factorization

W(λ) = X_−^*(λ) [ I_p  0 ; 0  −I_q ] X_−(λ)

of W can be taken to be

X_−(λ) = [ I_p  0 ; 0  σI_q ] + [ CP ; σ^{-1}B^* ] (I−QP)^{-1}(λI + A^*)^{-1} [ C^*  −QB ]

with inverse given by

X_−(λ)^{-1} = [ I_p  0 ; 0  σ^{-1}I_q ] − [ CP ; σ^{-2}B^* ] (λI + A^*)^{-1}(I−QP)^{-1} [ C^*  −σ^{-1}QB ].

PROOF. The result follows immediately from Theorem 5.1 upon

taking

Y_+(λ) = [ I_p  G(λ) ; 0  σI_q ].

Note that both Y_+(λ) and Y_+(λ)^{-1} = [ I_p  −σ^{-1}G(λ) ; 0  σ^{-1}I_q ] are

analytic in the closed right half plane since all poles of G(λ) are by

assumption in the open left half plane.
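The continuous-time Lyapunov equations (5.3)–(5.4) are again standard solver calls; as in the discrete case, the solutions are (up to sign and scaling) the system Gramians of G, so I−QP is invertible precisely when σ is not a Hankel singular value. A sketch with hypothetical data:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable G(lambda) = C (lambda I - A)^{-1} B, sigma(A) in the open left half plane
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
sigma = 0.5

# (5.3): A (sigma^2 P) + (sigma^2 P) A^* = B B^*
# (5.4): A^* Q + Q A = C^* C
sig2P = solve_continuous_lyapunov(A, B @ B.T)
Q = solve_continuous_lyapunov(A.T, C.T @ C)
P = sig2P / sigma**2

assert np.allclose(A @ sig2P + sig2P @ A.T, B @ B.T)
assert np.allclose(A.T @ Q + Q @ A, C.T @ C)

# W has a right spectral factorization iff I - QP (equivalently I - PQ) is invertible
print(abs(np.linalg.det(np.eye(2) - Q @ P)) > 1e-12)
```

Here `scipy.linalg.solve_continuous_lyapunov(a, q)` solves a x + x a^H = q directly, so no sign flip is needed on the right-hand sides.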

REFERENCES

[BR 1] Ball, J.A. and Ran, A.C.M., Hankel norm approximation of a rational matrix function in terms of its realization, in: Proceedings of the 1985 Symposium on the Mathematical Theory of Networks and Systems (Stockholm), to appear.

[BR 2] Ball, J.A. and Ran, A.C.M., Optimal Hankel norm model reductions and Wiener-Hopf factorization I: The canonical case, SIAM J. Control and Opt., to appear.

[BGK 1] Bart, H.; Gohberg, I. and Kaashoek, M.A., Minimal Factorization of Matrix and Operator Functions, OT 1, Birkhäuser, Basel, 1979.

[BGK 2] Bart, H.; Gohberg, I. and Kaashoek, M.A., The coupling method for solving integral equations, in: Topics in Operator Theory, Systems and Networks (ed. H. Dym and I. Gohberg), OT 12, Birkhäuser, Basel, 1983, 39-73.

[BGKvD] Bart, H.; Gohberg, I.; Kaashoek, M.A. and van Dooren, P., Factorization of transfer functions, SIAM J. Control and Opt. 18 (1980), 675-696.

[CG] Clancey, K. and Gohberg, I., Factorization of Matrix Functions and Singular Integral Operators, OT 3, Birkhäuser, Basel, 1981.

[Gl] Glover, K., All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds, Int. J. Control 39 (1984), 1115-1193.

[GF] Gohberg, I.C. and Feldman, I.A., Convolution Equations and Projection Methods for their Solutions, Amer. Math. Soc. (Providence), 1974.

[GK] Gohberg, I.C. and Krupnik, N.Ia., Einführung in die Theorie der eindimensionalen singulären Integraloperatoren, Birkhäuser, Basel, 1979.

[H] Helton, J.W., A spectral factorization approach to the distributed stable regulator problem: the algebraic Riccati equation, SIAM J. Control and Opt. 14 (1976), 639-661.

[W] Willems, J., Least squares stationary optimal control and the algebraic Riccati equation, IEEE Trans. Aut. Control AC-16 (1971), 621-634.

J.A. Ball, Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, USA

A.C.M. Ran, Subfaculteit der Wiskunde en Informatica, Vrije Universiteit, 1007 MC Amsterdam, The Netherlands


Operator Theory: Advances and Applications, Vol. 21 ©1986 Birkhäuser Verlag Basel


WIENER-HOPF EQUATIONS WITH SYMBOLS ANALYTIC

IN A STRIP

H. Bart, I. Gohberg, M.A. Kaashoek

The explicit method of factorization and inversion

developed in [BGK1], [BGK5] and [BGK6] is extended to a larger

class of Wiener-Hopf integral equations, namely those

with m×m matrix symbols of the form I − k̂(λ), where k̂ is the

Fourier transform of a function k from the class

e^{ω|t|}L_1^{m×m}(ℝ), ω < 0.

0. INTRODUCTION

Let W(λ) = I − k̂(λ), where k̂ is the Fourier transform

of an m×m matrix function k, the entries of which are

integrable on the real line. When k̂ has rational entries, the

matrix function W may be represented in the form

(0.1)   W(λ) = I + C(λ − A)^{-1}B,   −∞ < λ < ∞.

Here A is a square matrix of order n (where n may be larger

than m), A does not have eigenvalues on the real line, and B

and C are matrices of sizes n×m and m×n, respectively. The

representation (0.1), which is called a realization of W, has

been used (see [BGK1], Ch. IV) to give explicit formulas for a

Wiener-Hopf factorization of W in terms of the matrices A, B

and C. Also in this way the inverse of the Wiener-Hopf integral

operator with symbol W, its Fredholm characteristics and other

related properties may be expressed explicitly in terms of the

three matrices A, B and C (see [BGK1], Section IV.5; [BGK2]).

In the case when k̂ is analytic on the extended real line

(including infinity), the same ideas can be applied. The only

difference is that in the realization (0.1) for A, B and C one

has to use bounded linear operators acting between (possibly

infinite dimensional) Banach spaces (see [BGK2], [BGK3] and


40 Bart, Gohberg and Kaashoek

[BK]. The next step was taken in [BGK4], [BGK5] and [BGK6] for

certain matrix functions that are analytic in a strip around

the real line but not at infinity.

In the present paper we treat m×m matrix functions

k̂ such that k belongs to the class e^{ω|t|}L_1^{m×m}(ℝ) for some

ω < 0. Again the starting point is the representation (0.1),

but now the operators A, B and C have the following properties.

The operator -iA, which acts in a Banach space X, is a

(possibly unbounded) exponentially dichotomous operator of

exponential type ω (see Section 1.1 for the definition of these

notions), the operator B from ℂ^m into X is a (bounded) linear

operator and C is an A-bounded linear operator between X

and ℂ^m such that the linear transformation

(0.2)    iCe^{-itA}(I−P)x,   t > 0,

        −iCe^{-itA}Px,       t < 0,

maps the domain of A into the space of all functions

in e^{ω|t|}L_1^m(ℝ) with a derivative in L_1^m(ℝ), and extends to a

bounded linear operator defined on X with values

in e^{ω|t|}L_1^m(ℝ). The operator P appearing in (0.2) is the

separating projection for the exponentially dichotomous

operator -iA (see Section 1.1).

To use the method of factorization and inversion

referred to above for the class of functions considered in the

present paper, the main difficulty is to prove the following

result. If det W(λ) ≠ 0 for −∞ < λ < ∞, then

W(λ)^{-1} = I − C(λ − A^×)^{-1}B,   −∞ < λ < ∞,

where A^× = A − BC, the operator −iA^× is an exponentially

dichotomous operator, the operator C is A^×-bounded, the map

(0.2) remains bounded (for ω sufficiently close to zero) if A

and P are replaced by A^× and the separating projection P^× of

−iA^×, respectively, and, finally, the difference P − P^× is a


Symbols analytic in a strip 41

compact operator.

The theorem which gives the special representation

(0.1) for the matrix functions considered in this paper, and

the results mentioned in the previous paragraph, are proved in

the first chapter of the paper. In the second chapter we give

applications to inverse Fourier transforms, Wiener-Hopf

factorization, the Riemann-Hilbert boundary problem, and the

inversion and Fredholm properties of Wiener-Hopf integral

operators. The applications are established along the same

lines of reasoning as in [BGK4], [BGK5] and [BGK6]. However, in

the present context also it was necessary to overcome several

new technical difficulties.

1. REALIZATION

1.1. Preliminaries

Let X be a complex Banach space and let S be a linear

operator defined on a linear subspace V(S) of X with values in

X, written S(X → X). We say that S is exponentially dichotomous

if S is densely defined (i.e., V(S) is dense in X) and X admits

a topological direct sum decomposition

(1.1)    X = X_- ⊕ X_+

with the following properties: the decomposition reduces S, the
restriction of -S to X_- is the infinitesimal generator of an
exponentially decaying strongly continuous semigroup, and the
same is true for the restriction of S to X_+. These requirements

determine the decomposition (1.1) uniquely and the projection

of X onto X_ along X+ is called the separating projection for

S. In case S is bounded, S is exponentially dichotomous if and
only if the spectrum σ(S) of S does not meet the imaginary axis,
and then the separating projection is just the Riesz projection
corresponding to the part of σ(S) lying in the right half
plane. In general, the condition that S is exponentially
dichotomous involves a more complicated spectral splitting of
the (possibly connected) extended spectrum of S. The details


42 Bart, Gohberg and Kaashoek

(including a characterization of exponentially dichotomous

operators in terms of two-sided Laplace transforms) may be

found in [BGK4] or [BGK6].

Suppose S(X → X) is exponentially dichotomous, and let

(1.1) be the decomposition having the properties described

above. With respect to this decomposition, we write

    S = [ S_-   0  ]
        [  0   S_+ ].

The bisemigroup E(·;S) generated by S is then defined as

follows:

(1.2)    E(t;S)x = -e^{tS_-}Px,        t < 0,

         E(t;S)x = e^{tS_+}(I-P)x,     t > 0.

Here P is the separating projection for S. The operator S will
sometimes be referred to as the bigenerator of E(·;S). Note
that the function E(·;S) takes its values in L(X), the Banach

space of all bounded linear operators on X.

From standard semigroup theory (cf. [HP], [P]) it is

known that there exists a real constant ω such that

(1.3)    sup_{t≤0} e^{ωt} ‖e^{tS_-}‖ < ∞,    sup_{t≥0} e^{-ωt} ‖e^{tS_+}‖ < ∞.

In the present situation (exponential decay), we may take the
constant ω negative. If (1.3) is fulfilled, we say that S (or
the bisemigroup generated by S) is of exponential type ω. Note
that (1.3) is equivalent to

(1.4)    sup_{t≠0} e^{-ω|t|} ‖E(t;S)‖ < ∞.

For later use we recall a few simple facts about

bisemigroups. Suppose S(X → X) is exponentially dichotomous,
and let E(·;S) be the corresponding bisemigroup given by (1.2).
Take x ∈ X. The function E(·;S)x is continuous on ℝ\{0} and

exponentially decaying (in both directions). It also has a jump
(discontinuity) at the origin; in fact

    lim_{t→0-} E(t;S)x = -Px,    lim_{t→0+} E(t;S)x = (I-P)x,

where P is the separating projection for S. If x belongs to the
domain V(S) of S, then E(·;S)x is differentiable on ℝ\{0} and

    (d/dt)E(t;S)x = E(t;S)Sx = SE(t;S)x,    t ≠ 0.

Obviously, the derivative of E(·;S)x is continuous on ℝ\{0},

exponentially decaying (in both directions) and has a jump at

the origin. From (1.2) it is clear that

    E(t;S)P = PE(t;S) = E(t;S),            t < 0,

    E(t;S)(I-P) = (I-P)E(t;S) = E(t;S),    t > 0.

Moreover, the following semigroup properties hold:

    E(t+s;S) = -E(t;S)E(s;S),    t,s < 0,

    E(t+s;S) = E(t;S)E(s;S),     t,s > 0.
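In the bounded, finite-dimensional case these facts are easy to check numerically. The following sketch uses a hypothetical 2×2 matrix S (all concrete values are illustrative, not from the text): the separating projection is computed as the Riesz projection onto the right-half-plane part of the spectrum, and the jump and semigroup properties of E(·;S) are verified.

```python
import numpy as np

# Hypothetical 2x2 example: S has one eigenvalue in each half plane,
# so S is exponentially dichotomous and the separating projection P is
# the Riesz projection for the right-half-plane part of the spectrum.
S = np.array([[1.0, 3.0],
              [0.0, -2.0]])
w, V = np.linalg.eig(S)
P = (V @ np.diag((w.real > 0).astype(float)) @ np.linalg.inv(V)).real
I = np.eye(2)

def expS(t):
    """e^{tS} via the eigendecomposition (S has distinct eigenvalues)."""
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

def E(t):
    """The bisemigroup E(t;S) of (1.2)."""
    return -expS(t) @ P if t < 0 else expS(t) @ (I - P)

# Jump at the origin: E(0-) = -P and E(0+) = I - P.
assert np.allclose(E(-1e-9), -P, atol=1e-6)
assert np.allclose(E(1e-9), I - P, atol=1e-6)

# Semigroup properties: E(t+s) = E(t)E(s) for t,s > 0, and
# E(t+s) = -E(t)E(s) for t,s < 0.
assert np.allclose(E(0.7), E(0.3) @ E(0.4))
assert np.allclose(E(-0.7), -E(-0.3) @ E(-0.4))
```

In the unbounded case treated in the text the same formulas hold with e^{tS_±} replaced by the semigroups generated by the restrictions of ±S.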

1.2. Realization triples

Let m be a positive integer. By V_1^m(ℝ) we denote the
linear subspace of L_1^m(ℝ) = L_1(ℝ;ℂ^m) consisting of all
f ∈ L_1^m(ℝ) for which there exists g ∈ L_1^m(ℝ) such that

(2.1)    f(t) = ∫_{-∞}^t g(s)ds,    a.e. on (-∞,0],

         f(t) = -∫_t^∞ g(s)ds,      a.e. on [0,∞).

If f ∈ V_1^m(ℝ), then there is only one g ∈ L_1^m(ℝ) satisfying
(2.1). This g is called the derivative of f and denoted by f'.

We also stipulate


    f_-(0) = ∫_{-∞}^0 g(s)ds,    f_+(0) = -∫_0^∞ g(s)ds,

which implies that

(2.2)    f_-(0) - f_+(0) = ∫_{-∞}^∞ g(s)ds = ∫_{-∞}^∞ f'(s)ds.

We shall encounter this identity in Section 4 below.
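A concrete instance of (2.2) can be checked numerically; the function g below is illustrative. With g(s) = e^{-|s|}, the f of (2.1) is f(t) = e^t on (-∞,0] and f(t) = -e^{-t} on [0,∞), so f_-(0) = 1 and f_+(0) = -1, and (2.2) predicts that their difference equals ∫ g = 2.

```python
import numpy as np

# Verify f_-(0) - f_+(0) = int g for the illustrative choice g(s) = e^{-|s|}.
trap = lambda y, x: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0
s = np.linspace(-40.0, 40.0, 800001)
total = trap(np.exp(-np.abs(s)), s)   # int_{-inf}^{inf} g(s) ds, truncated
f_minus0, f_plus0 = 1.0, -1.0         # one-sided values of f at the origin
assert abs((f_minus0 - f_plus0) - total) < 1e-6
```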

Let X be a complex Banach space, let A(X → X),
B: ℂ^m → X and C(X → ℂ^m) be linear operators and let ω be a
negative constant. We call θ = (A,B,C) a realization triple of
exponential type ω if the following conditions are satisfied:

(1) -iA is exponentially dichotomous of exponential type ω,

(2) V(C) ⊇ V(A) and C is A-bounded,

(3) there exists a linear operator Λ_θ: X → L_1^m(ℝ) such that

    (i)  sup_{‖x‖≤1} ∫_{-∞}^∞ e^{-ω|t|} ‖Λ_θx(t)‖ dt < ∞,

    (ii) Λ_θ maps V(A) into V_1^m(ℝ) and

         Λ_θx = iCE(·;-iA)x,    x ∈ V(A).

Note that B, being a linear operator from ℂ^m into X, is
automatically bounded. Observe also that (i) implies that Λ_θ is
bounded and maps X into L_{1,ω}^m(ℝ), where

, 00

(2.3) L ml (R) ,00

Taking into account (ii) and the fact that V(A) is dense in X,

one sees that Ae is determined uniquely. Since 00 is negative,

L7,00(1I() given by (2.3) is a linear subspace of L7(R). The space

X is called the state space, the space ~m the input/output

space of the triple a.

Suppose θ is a realization triple of exponential type
ω and ω ≤ ω₁ < 0. Then θ is a realization triple of exponential
type ω₁ too. To see this, note that (1) and (i) are fulfilled
with ω replaced by ω₁. When the actual value of ω is
irrelevant, we simply call θ a realization triple. So θ =
(A,B,C) is a realization triple (without further qualification)
if θ is a realization triple of exponential type ω for some
ω < 0. As we saw, the operator Λ_θ does not depend on the value
of ω, and the same is true with regard to the separating
projection for -iA. This projection will be denoted by P_θ.

In [BGK4], [BGK5] and [BGK6] we have been dealing with
the situation where -iA(X → X) is exponentially dichotomous,
B: ℂ^m → X is a (bounded) linear operator and C: X → ℂ^m is a
bounded linear operator too. It is of interest to note that
under these circumstances θ = (A,B,C) is a realization triple.
In fact, if -iA is of exponential type ω, then θ is a
realization triple of exponential type ω₁ whenever
ω ≤ ω₁ < 0. To see this, define Λ_θx for each x ∈ X by Λ_θx =
iCE(·;-iA)x and use the inequality (1.4).

Let θ = (A,B,C) be a realization triple with state
space X. The projection P_θ of X associated with θ is defined in
terms of A alone; the operator Λ_θ from X into L_1^m(ℝ) is
completely determined by A and C. Next we introduce another
operator, namely Γ_θ: L_1^m(ℝ) → X, which depends only on A and B.
The definition is

    Γ_θφ = ∫_{-∞}^∞ E(-t;-iA)Bφ(t)dt.

It is easy to see that Γ_θ is well-defined, linear and bounded.
Note in this context that because of the finite dimensionality
of ℂ^m, the operator function E(·;-iA)B is continuous on ℝ\{0}
with a (possible) jump at the origin, all with respect to the
operator norm topology.

PROPOSITION 2.1. Let θ be a realization triple of
exponential type ω, and let Γ_θ be as above. Then Γ_θ is compact
and maps L_1^m(ℝ) into V(A).

PROOF. The compactness of Γ_θ has already been

established in [BGK5], Lemma 3.2. So we shall concentrate on


the second assertion of the proposition.

Take φ in V_1^m(ℝ). We need to show that Γ_θφ ∈ V(A). For
simplicity we restrict ourselves to the case when φ vanishes
almost everywhere on (-∞,0]. Write

    φ(t) = -∫_t^∞ ψ(s)ds,    t > 0,

where ψ ∈ L_1^m(ℝ). Then

    Γ_θφ = -∫_0^∞ (∫_t^∞ E(-t;-iA)Bψ(s)ds)dt

and applying Fubini's theorem we get

    Γ_θφ = -∫_0^∞ (∫_0^s E(-t;-iA)Bψ(s)dt)ds.

Since -iA is exponentially dichotomous, the origin belongs to
the resolvent set of A. So it makes sense to consider the
operator function iE(-t;-iA)A^{-1}B. This function is
differentiable on [0,∞) and its derivative is the continuous
operator function -E(-t;-iA)B. (Pointwise this is clear from
the results collected together in Section 1.1; next use the
finite dimensionality of ℂ^m.) Thus

    -∫_0^s E(-t;-iA)Bdt = iE(-s;-iA)A^{-1}B + iP_θA^{-1}B

and consequently

    Γ_θφ = ∫_0^∞ (iE(-s;-iA)A^{-1}B + iP_θA^{-1}B)ψ(s)ds

         = A^{-1}[∫_0^∞ (iE(-s;-iA)B + iP_θB)ψ(s)ds].

But then Γ_θφ ∈ ImA^{-1} = V(A). □

We conclude this section with the following
observation concerning Γ_θ. Let Q be the projection of L_1^m(ℝ)
onto L_1^m[0,∞) along L_1^m(-∞,0], where L_1^m(ℝ) is identified in the
usual way with L_1^m(-∞,0] ⊕ L_1^m[0,∞). Then it is clear from the
properties of bisemigroups mentioned in the last paragraph of
Section 1.1 that

(2.4)    (I-P_θ)Γ_θQ = 0,    P_θΓ_θ(I-Q) = 0.

In other words, Γ_θ maps L_1^m[0,∞) into ImP_θ and L_1^m(-∞,0] into
KerP_θ.

1.3. The realization theorem

Suppose θ = (A,B,C) is a realization triple of
(negative) exponential type ω. For fixed y in ℂ^m (the
input/output space of θ), we have that Λ_θBy ∈ L_{1,ω}^m(ℝ). Thus the
expression

(3.1)    k_θ(t)y = (Λ_θBy)(t),    a.e. on ℝ,

determines a unique element k_θ of L_{1,ω}^{m×m}(ℝ), the linear subspace
of L_1^{m×m}(ℝ) consisting of all h ∈ L_1^{m×m}(ℝ) for which
e^{-ω|·|}h(·) ∈ L_1^{m×m}(ℝ). We call k_θ the kernel associated with θ.

Since k_θ ∈ L_{1,ω}^{m×m}(ℝ) ⊂ L_1^{m×m}(ℝ), the Fourier
transform k̂_θ of k_θ is an analytic m×m matrix function on the
strip |Imλ| < -ω. More explicitly, the following can be said.

THEOREM 3.1. Let θ be a realization triple of
exponential type ω. Then

(3.2)    k̂_θ(λ) = -C(λ-A)^{-1}B,    |Imλ| < -ω.

The A-boundedness of C implies that C(λ-A)^{-1} is a
well-defined bounded linear operator depending analytically on
λ in the strip |Imλ| < -ω.

PROOF. It suffices to show that for x ∈ X and
|Imλ| < -ω

(3.3)    C(λ-A)^{-1}x = -∫_{-∞}^∞ e^{iλt}(Λ_θx)(t)dt,

i.e., -C(λ-A)^{-1}x is equal to the Fourier transform Λ̂_θx of Λ_θx.

Take |Imλ| < -ω. As was observed already, C(λ-A)^{-1} is
a bounded linear operator. Now consider the mapping x ↦ Λ̂_θx(λ)
from X into ℂ^m. This mapping is linear and bounded too.
Linearity is obvious and boundedness follows from the estimate

    ‖Λ̂_θx(λ)‖ ≤ ∫_{-∞}^∞ e^{-ω|t|} ‖Λ_θx(t)‖ dt,

together with condition (i) in Section 1.2. So it is enough to
check the identity (3.3) for vectors x belonging to the dense
linear subspace V(A) of X.

Take x ∈ V(A) and put z = Ax. According to formula
(1.5) in [BGK6],

    (λ-A)^{-1}z = -i ∫_{-∞}^∞ e^{iλt} E(t;-iA)z dt.

Recall that CA^{-1} is a bounded linear operator. It follows that

    C(λ-A)^{-1}x = CA^{-1}(λ-A)^{-1}z = -i ∫_{-∞}^∞ e^{iλt} CE(t;-iA)x dt
                 = -∫_{-∞}^∞ e^{iλt}(Λ_θx)(t)dt,

the latter equality holding by virtue of condition (ii) in
Section 1.2. □
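In the finite-dimensional bounded case the realization formula (3.2) can be checked directly by quadrature. The sketch below is illustrative only: S = -iA is an arbitrary real 2×2 matrix with one eigenvalue in each half plane, and B, C are arbitrary bounded (matrix) operators.

```python
import numpy as np

# Finite-dimensional sketch of (3.2): the Fourier transform of
# k(t) = iC E(t;-iA) B should equal -C(lam - A)^{-1} B on the real line.
S = np.array([[1.0, 3.0], [0.0, -2.0]])      # S = -iA, so A = iS
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, -1.0]])

w, V = np.linalg.eig(S)
c = (C @ V).ravel() * (np.linalg.inv(V) @ B).ravel()   # spectral residues

def k_neg(t):   # t < 0: k(t) = -iC e^{tS} P B  (right-half-plane part)
    return -1j * sum(c[j] * np.exp(t * w[j]) for j in range(2) if w[j].real > 0)

def k_pos(t):   # t > 0: k(t) = iC e^{tS} (I-P) B  (left-half-plane part)
    return 1j * sum(c[j] * np.exp(t * w[j]) for j in range(2) if w[j].real < 0)

trap = lambda y, x: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0
lam = 0.7                                    # a real point in the strip
tn = np.linspace(-40.0, 0.0, 200001)
tp = np.linspace(0.0, 40.0, 200001)
khat = (trap(np.exp(1j * lam * tn) * k_neg(tn), tn)
        + trap(np.exp(1j * lam * tp) * k_pos(tp), tp))

A = 1j * S
target = (-C @ np.linalg.inv(lam * np.eye(2) - A) @ B)[0, 0]
assert abs(khat - target) < 1e-4
```

The split of the quadrature at t = 0 reflects the jump of the bisemigroup there.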

Representations of the type (3.1) will play an

important role in this paper. They are called (spectral)

exponential representations. Instead of (3.1) we shall also

write

(3.4)    k_θ(t) = iCE(t;-iA)B.

Note, however, that the latter expression has to be understood

in the right way, involving condition (ii) in Section 1.2.
Functions of the type (3.4) with B and C bounded were
considered in [BGK4], [BGK5] and [BGK6].

An identity of the type (3.2) is called a realization.

This notion is taken from systems theory (cf. [K], [KFA], [Ka]
and [BGK1]). The realizations appearing in the present paper
feature not only a (possibly) unbounded state space operator A
but also a (possibly) unbounded output operator C. However, C
and A are related in the way described in Section 1.2.

1.4. Construction of realization triples

The kernel associated with a realization triple of
exponential type ω belongs to L_{1,ω}^{m×m}(ℝ). In this section we
shall see that, conversely, each k ∈ L_{1,ω}^{m×m}(ℝ) appears as such a
kernel. An explicit construction is contained in the next
theorem.

THEOREM 4.1. Let k ∈ L_{1,ω}^{m×m}(ℝ), where m is a positive
integer and ω is a negative constant. Then k is the kernel
associated with a realization triple of exponential type ω. In
fact k = k_θ, where the realization triple θ = (A,B,C) with
state space X = L_1^m(ℝ) and input/output space ℂ^m is defined as
follows:

    V(A) = V(C) = V_1^m(ℝ),

    Af(t) = -iωf(t) + if'(t),    a.e. on (-∞,0],

    Af(t) = iωf(t) + if'(t),     a.e. on [0,∞),

    By(t) = e^{-ω|t|}k(t)y,      a.e. on ℝ,

    Cf = if_-(0) - if_+(0).

Recall from Section 1.2 that for f ∈ V(A) = V_1^m(ℝ) the
left and right evaluations f_-(0) and f_+(0) at the origin are
well-defined. In view of (2.2) one can rewrite the expression
for Cf as

(4.1)    Cf = i ∫_{-∞}^∞ f'(t)dt.

PROOF. As is well-known, the backward translation
semigroup on L_1^m[0,∞) is strongly continuous. The infinitesimal
generator of this semigroup has V_1^m[0,∞) as its domain and its
action amounts to taking the derivative (cf. the first
paragraph in Section 1.2). Using this one sees that -iA is an
exponentially dichotomous operator of exponential type ω and
that the bisemigroup associated with -iA acts as follows: for
t < 0,

    E(t;-iA)f(s) = -e^{-ωt}f(t+s),    a.e. on (-∞,0],

    E(t;-iA)f(s) = 0,                 a.e. on [0,∞);

for t > 0,

    E(t;-iA)f(s) = 0,                 a.e. on (-∞,0],

    E(t;-iA)f(s) = e^{ωt}f(t+s),      a.e. on [0,∞).

The separating projection for -iA is the projection of L_1^m(ℝ)
onto L_1^m(-∞,0] along L_1^m[0,∞).

Define Λ: X → L_1^m(ℝ) by

(4.2)    Λf(t) = e^{ω|t|}f(t),    a.e. on ℝ.

Then Λ satisfies the conditions (i) and (ii) of Section 1.2
with Λ_θ replaced by Λ. For (i) this is obvious. As to the first
part of (ii), observe that f ∈ V_1^m(ℝ) and ω < 0 imply that
e^{ω|·|}f(·) ∈ V_1^m(ℝ) too. To check the second part of (ii), one
uses the above description of the bisemigroup E(t;-iA) and the
definition of C or (4.1). From the latter expression it is also
clear that ‖Cf‖ ≤ -ω‖f‖ + ‖Af‖, and so C is A-bounded.

Since k ∈ L_{1,ω}^{m×m}(ℝ), the operator B is well-defined,
linear (and bounded). Thus θ = (A,B,C) is a realization triple
of exponential type ω. We claim that the kernel k_θ associated

with θ coincides with k. Indeed, for y ∈ ℂ^m the following
identities hold a.e. on ℝ:

    k_θ(t)y = (Λ_θBy)(t) = e^{ω|t|}By(t) = k(t)y.

Since ℂ^m is finite dimensional, it follows that k_θ(t) = k(t)
a.e. on ℝ. In other words k_θ and k coincide as elements of
L_{1,ω}^{m×m}(ℝ). □

REMARK 4.2. Let θ = (A,B,C) be the realization triple
constructed in the preceding theorem. As we have already seen,
the projection P_θ associated with θ is the projection of L_1^m(ℝ)
onto L_1^m(-∞,0] along L_1^m[0,∞), i.e., P_θ = I-Q, where Q is as in
the last paragraph of Section 1.2. Also Λ_θ is the bounded
linear operator acting on L_1^m(ℝ) defined by (4.2). Finally Γ_θ is
the bounded linear operator on L_1^m(ℝ) given by

    Γ_θφ(t) = -e^{ωt} ∫_{-∞}^0 k(t+s)φ(-s)ds,    a.e. on (-∞,0],

    Γ_θφ(t) = e^{-ωt} ∫_0^∞ k(t+s)φ(-s)ds,       a.e. on [0,∞)

(cf. formula (2.4) in Section 1.2 and formulas (2.8) and (2.9)
in Section II.2 below).

1.5. Basic properties of realization triples

It is convenient to introduce the following notation.

Let θ = (A,B,C) be a realization triple. The linear operator
A-BC, having V(A) as its domain, will be denoted by A^×. Note
that A^× does not only depend on A, but also on B and C. By θ^×
we indicate the triple (A^×,B,-C). For reasons that will become
clear later, θ^× is sometimes called the inverse of θ. For what
follows, it is essential to know under what circumstances θ^× is
again a realization triple.

THEOREM 5.1. Let θ = (A,B,C) be a realization triple.
The following statements are equivalent:

(i) θ^× is a realization triple,

(ii) det(I + C(λ-A)^{-1}B) does not vanish on the real line,

(iii) A^× has no spectrum on the real line,

(iv) -iA^× is exponentially dichotomous.

Condition (iii) simply means that for each real λ, the
linear operator λ-A^× maps V(A^×) = V(A) in a one-one way onto X.

PROOF. Suppose A^× has no spectrum on the real line.
Then for each real λ, the linear operator I - C(λ-A^×)^{-1}B acting
on ℂ^m is well-defined. A straightforward computation shows that
it is the inverse of I + C(λ-A)^{-1}B. Conversely, if
W(λ) = I + C(λ-A)^{-1}B is invertible, then

(5.1)    W(λ)^{-1} = I - C(λ-A^×)^{-1}B.

This proves the equivalence of (ii) and (iii). By definition
(i) implies (iv) and it is obvious that (iv) implies (iii). It
remains to prove that (ii) and (iii) together give (i).

Suppose (ii) and (iii) are satisfied. Taking λ = 0 in
(5.1), one gets (A^×)^{-1} = A^{-1} + A^{-1}BW(0)^{-1}CA^{-1}, where
W(0) = I - CA^{-1}B. Since C is A-bounded, the operator CA^{-1} is
bounded, and hence (A^×)^{-1} is bounded. It follows that A^× is
closed. Also, C(A^×)^{-1} = CA^{-1} + CA^{-1}BW(0)^{-1}CA^{-1} is bounded, and
from this the A^×-boundedness of C is immediate.

With W(λ) as in the first paragraph of this proof, we
have W(λ) = I - k̂_θ(λ), where k_θ ∈ L_{1,ω}^{m×m}(ℝ). By Wiener's theorem
there exists k^× ∈ L_1^{m×m}(ℝ) such that W(λ)^{-1} = I - k̂^×(λ). Taking
|ω| smaller (if necessary), we may assume that k^× ∈ L_{1,ω}^{m×m}(ℝ)
too (cf. [GRS]). But then -iA^× is exponentially dichotomous of
exponential type ω. The proof of this is based on Theorem 4.1
in [BGK6] and (5.1). The argument goes along the lines
indicated in Part III of the proof of Theorem 7.1 in [BGK6].

Analogously to what we have there, the bisemigroup generated
by -iA^× is given by

    E(t;-iA^×)x = E(t;-iA)x + ∫_{-∞}^∞ E(t-s;-iA)BΛ_θx(s)ds

                  - ∫_{-∞}^∞ (∫_{-∞}^∞ E(t-s-u;-iA)Bk^×(u)Λ_θx(s)du)ds.

We leave the details to the reader.

The negative constant ω having been taken sufficiently
close to zero, one has that θ is of exponential type ω and
k^× ∈ L_{1,ω}^{m×m}(ℝ). A standard reasoning now shows that the
convolution product k^×*Λ_θx,

    (k^×*Λ_θx)(t) = ∫_{-∞}^∞ k^×(t-s)Λ_θx(s)ds,    a.e. on ℝ,

determines a (bounded) linear operator from X into L_1^m(ℝ) such
that

    sup_{‖x‖≤1} ∫_{-∞}^∞ e^{-ω|t|} ‖(k^×*Λ_θx)(t)‖ dt < ∞.

But then the expression

(5.2)    Λ^×x = -Λ_θx + k^×*Λ_θx

defines a (bounded) linear operator Λ^×: X → L_1^m(ℝ) for which
condition (i) in Section 1.2, with Λ_θ replaced by Λ^×, is
satisfied.

Next, take x ∈ X and consider the Fourier transform of
Λ^×x. Clearly, Λ̂^×x(λ) = -(I - k̂^×(λ))Λ̂_θx(λ) = -W(λ)^{-1}Λ̂_θx(λ). As we
have seen above, W(λ)^{-1} = I - C(λ-A^×)^{-1}B. Also we know from
(3.3) that Λ̂_θx(λ) = -C(λ-A)^{-1}x. Hence
Λ̂^×x(λ) = (I - C(λ-A^×)^{-1}B)C(λ-A)^{-1}x. It follows that

(5.3)    Λ̂^×x(λ) = C(λ-A^×)^{-1}x.

Here λ is real (or, more generally, |Imλ| < -ω).

Take x ∈ V(A^×) = V(A), and write x = (A^×)^{-1}z. Then
C(λ-A^×)^{-1}x = C(A^×)^{-1}(λ-A^×)^{-1}z, where C(A^×)^{-1} is a bounded
linear operator from X into ℂ^m. Using (5.3) and formula (1.5)

in [BGK6], one gets

    C(λ-A^×)^{-1}x = -iC(A^×)^{-1} ∫_{-∞}^∞ e^{iλt} E(t;-iA^×)z dt
                   = -i ∫_{-∞}^∞ e^{iλt} CE(t;-iA^×)x dt.

But then we may conclude that

    Λ^×x(t) = -iCE(t;-iA^×)x,    a.e. on ℝ.

It remains to prove that Λ^×x ∈ V_1^m(ℝ).

In view of the properties of Λ_θ and the identity
(5.2), it suffices to show that k^×*Λ_θx belongs to V_1^m(ℝ).
Considering Λ_θx as a function, we may write

    Λ_θx(t) = ∫_{-∞}^t g(s)ds,    t < 0,

    Λ_θx(t) = -∫_t^∞ g(s)ds,      t > 0,

where g ∈ L_1^m(ℝ). Put

    h = k^×*g - (∫_{-∞}^∞ g(s)ds)k^×.

Then h ∈ L_1^m(ℝ) and

    (k^×*Λ_θx)(t) = ∫_{-∞}^t h(s)ds,    a.e. on (-∞,0],

    (k^×*Λ_θx)(t) = -∫_t^∞ h(s)ds,      a.e. on [0,∞). □

REMARK 5.2. Suppose θ = (A,B,C) and θ^× = (A^×,B,-C) are
both realization triples. Then it is clear from the proof of
Theorem 5.1 that Λ_θ and Λ_θ^× are related by

    Λ_θ^×x(t) = -Λ_θx(t) + ∫_{-∞}^∞ k^×(t-s)Λ_θx(s)ds,    a.e. on ℝ,

where (I - k̂_θ(λ))^{-1} = I - k̂^×(λ). For typographical reasons we
write Λ_θ^× instead of Λ_{θ^×}. Similar notations (such as P_θ^×
and k_θ^×) will be used below.
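In the finite-dimensional bounded case, the identity (5.1) on which Theorem 5.1 rests is a classical fact about rational matrix functions and is easy to verify numerically. The sketch below uses arbitrary illustrative matrices (the choice of n, m, and the seeded random data is not from the text):

```python
import numpy as np

# Sketch of (5.1): with A^x = A - BC one has, at any lam where both
# resolvents exist,
#   (I + C(lam - A)^{-1} B)^{-1} = I - C(lam - A^x)^{-1} B.
rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
Ax = A - B @ C                      # the "inverse" state operator A^x

lam = 0.3
In, Im = np.eye(n), np.eye(m)
W = Im + C @ np.linalg.inv(lam * In - A) @ B
Winv = Im - C @ np.linalg.inv(lam * In - Ax) @ B
assert np.allclose(W @ Winv, Im)
assert np.allclose(Winv @ W, Im)
```

The identity follows from the resolvent equation (λ-A)^{-1} - (λ-A^×)^{-1} = -(λ-A)^{-1}BC(λ-A^×)^{-1}, exactly as in the proof above.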

THEOREM 5.3. Suppose θ = (A,B,C) and θ^× = (A^×,B,-C)
are realization triples. Then the associated projections P_θ and
P_θ^× are related by

(5.4)    Γ_θΛ_θ^× = P_θ^× - P_θ,

and P_θ - P_θ^× is a compact linear operator.

PROOF. By Proposition 2.1, the operator Γ_θ is compact.
So it suffices to establish (5.4), and for this it is enough to
show that the identity Γ_θΛ_θ^×x = P_θ^×x - P_θx holds on the domain
of A. For x ∈ V(A), we have Λ_θ^×x = -iCE(·;-iA^×)x. Using this,
the desired result is obtained along the lines indicated in the
proof of [BGK5], Lemma 3.2. □

II. APPLICATIONS

II.1. Inverse Fourier transforms

In this section we shall give an explicit formula for
the inverse Fourier transform of a function of the type
I - (I - k̂(λ))^{-1}, where k ∈ L_{1,ω}^{m×m}(ℝ) with ω < 0. Recall from
Section 1.4 that L_{1,ω}^{m×m}(ℝ) coincides with the class of all
kernels of realization triples of exponential type ω.

THEOREM 1.1. Let θ = (A,B,C) be a realization triple.
Then det(I - k̂_θ(λ)) does not vanish on the real line if and only
if θ^× = (A^×,B,-C) is a realization triple, and in that case

    (I - k̂_θ(λ))^{-1} = I - k̂_θ^×(λ),    λ ∈ ℝ,

where k_θ^× is the kernel associated with θ^×, i.e.,

    k_θ^×(t)y = Λ_θ^×By(t),    a.e. on ℝ.

Less precisely, the latter identity can be written as


    k_θ^×(t) = -iCE(t;-iA^×)B.

The condition that θ^× is a realization
triple can be replaced by any of the equivalent conditions in
Theorem 5.1 of Chapter I.

PROOF. The first part of the theorem is immediate from
Theorems 3.1 and 5.1 in Ch. I. To prove the second part, assume
that det(I - k̂_θ(λ)) does not vanish on the real line, and hence
θ^× = (A^×,B,-C) is a realization triple. Let k^× ∈ L_1^{m×m}(ℝ) be such
that (I - k̂_θ(λ))^{-1} = I - k̂^×(λ), λ ∈ ℝ. The existence of k^× is
guaranteed by Wiener's theorem. Now, for λ on the real line,

    I - k̂^×(λ) = (I - k̂_θ(λ))^{-1} = I - C(λ-A^×)^{-1}B = I - k̂_θ^×(λ),

and hence k^×(t) = k_θ^×(t) a.e. on ℝ. □
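For a scalar toy triple the inversion of Theorem 1.1 can be watched in the time domain. Writing out (I - k̂)(I - k̂^×) = I gives k + k^× = k*k^×. The concrete numbers below (A = 2i, B = C = 1, hence A^× = A - BC = -1 + 2i) are illustrative only; the kernels then come out as k(t) = -i e^{2t} and k^×(t) = i e^{(2+i)t} for t < 0, both vanishing for t > 0.

```python
import numpy as np

# Scalar sketch: verify k + k^x = k * k^x at one point by quadrature.
k  = lambda t: np.where(t < 0, -1j * np.exp(2 * np.minimum(t, 0)), 0.0)
kx = lambda t: np.where(t < 0, 1j * np.exp((2 + 1j) * np.minimum(t, 0)), 0.0)

t0 = -1.0
s = np.linspace(-30.0, 0.0, 300001)   # k^x lives on the negative half line
y = k(t0 - s) * kx(s)                 # integrand of (k * k^x)(t0)
conv = np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2.0   # trapezoid rule
assert abs(conv - (k(t0) + kx(t0))) < 1e-3
```

The `np.minimum` guard only keeps the unselected branch of `np.where` from evaluating large exponentials; it does not change the selected values.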

Theorem 1.1 can be reformulated in terms of full line

convolution integral operators.

THEOREM 1.2. Consider the convolution integral
operator L on L_1^m(ℝ) defined by

    Lφ(t) = ∫_{-∞}^∞ k_θ(t-s)φ(s)ds,    a.e. on ℝ,

where k_θ is the kernel of the realization triple θ = (A,B,C).
Then I-L is invertible if and only if θ^× = (A^×,B,-C) is a
realization triple, and in that case the inverse of I-L is
given by

    (I-L)^{-1}ψ(t) = ψ(t) - ∫_{-∞}^∞ k_θ^×(t-s)ψ(s)ds,    a.e. on ℝ.

The condition that θ^× is a realization triple can be
replaced by any of the equivalent conditions in Theorem 5.1 of
Ch. I. In a concise manner, the conclusion of Theorem 1.2 may
be phrased as (I-L)^{-1} = I-L^×, where L^× stands of course for the
(full line) convolution integral operator associated with θ^×.


II.2. Coupling

In Sections II.4 and II.5 below we want to apply the
coupling method developed in [BGK3]. The next result contains
the key step in this direction.

THEOREM 2.1. Suppose θ = (A,B,C) and θ^× = (A^×,B,-C)
are realization triples, and introduce

    K: L_1^m[0,∞) → L_1^m[0,∞),      Kφ(t) = ∫_0^∞ k_θ(t-s)φ(s)ds,      a.e. on [0,∞),

    K^×: L_1^m[0,∞) → L_1^m[0,∞),    K^×φ(t) = ∫_0^∞ k_θ^×(t-s)φ(s)ds,  a.e. on [0,∞),

    U: ImP_θ^× → L_1^m[0,∞),         Ux(t) = Λ_θx(t),                   a.e. on [0,∞),

    U^×: ImP_θ → L_1^m[0,∞),         U^×x(t) = -Λ_θ^×x(t),              a.e. on [0,∞),

    R: L_1^m[0,∞) → ImP_θ,           Rφ = ∫_0^∞ E(-t;-iA)Bφ(t)dt,

    R^×: L_1^m[0,∞) → ImP_θ^×,       R^×φ = -∫_0^∞ E(-t;-iA^×)Bφ(t)dt,

    J: ImP_θ^× → ImP_θ,              Jx = P_θx,

    J^×: ImP_θ → ImP_θ^×,            J^×x = P_θ^×x,

where m is the dimension of the (common) input/output space ℂ^m
of θ and θ^×. Then all these operators are well-defined, linear
and bounded. Moreover

    [ I-K   U ]
    [  R    J ]

is invertible with inverse

    [ I-K^×   U^× ]
    [  R^×    J^× ].

In terms of [BGK3], the theorem says that the
operators I-K and J^× are matricially coupled with coupling
relation

(2.1)    [ I-K   U ]^{-1}  =  [ I-K^×   U^× ].
         [  R    J ]          [  R^×    J^× ]

The operator J^× is also called an indicator for I-K. Note that
K is the Wiener-Hopf integral operator with kernel k_θ.
Analogously, K^× is the Wiener-Hopf integral operator with
kernel k_θ^×.

PROOF. All operators appearing in Theorem 2.1 are
well-defined, linear and bounded, and acting between the
indicated spaces. In this context three observations should be
made. First, let Q be the projection of L_1^m(ℝ) onto L_1^m[0,∞)
along L_1^m(-∞,0]. Then U = QΛ_θ|ImP_θ^×: ImP_θ^× → L_1^m[0,∞) and
U^× = -QΛ_θ^×|ImP_θ: ImP_θ → L_1^m[0,∞). Second, viewing P_θ as an
operator from X into ImP_θ, we have R = P_θΓ_θ|L_1^m[0,∞) and,
similarly, R^× = -P_θ^×Γ_θ^×|L_1^m[0,∞).

Proving Theorem 2.1, that is, checking the coupling
relation (2.1), amounts to verifying eight identities. Pairwise
these identities have analogous proofs. So actually only four
identities have to be taken care of. These will be dealt with
below.

First we shall prove that

(2.2)    R(I-K^×) + JR^× = 0.

Take φ in L_1^m[0,∞). We need to show that RK^×φ = Rφ + JR^×φ.
Whenever this is convenient, it may be assumed that φ is a
continuous function with compact support in (0,∞).

Applying Fubini's theorem, one gets

    RK^×φ = ∫_0^∞ (∫_0^∞ E(-t;-iA)Bk_θ^×(t-s)φ(s)ds)dt
          = ∫_0^∞ (∫_0^∞ E(-t;-iA)Bk_θ^×(t-s)φ(s)dt)ds.

For s > 0, consider the identity

(2.3)    ∫_0^∞ E(-t;-iA)BΛ_θ^×x(t-s)dt = E(-s;-iA)x - P_θE(-s;-iA^×)x.

To begin with, take x ∈ V(A) = V(A^×). Then, for t ≠ 0,s,

    (d/dt)(E(-t;-iA)E(t-s;-iA^×)x) = iE(-t;-iA)BCE(t-s;-iA^×)x
                                    = iE(-t;-iA)BC(A^×)^{-1}E(t-s;-iA^×)A^×x.

Because C(A^×)^{-1} is bounded, the last expression is a continuous
function of t on the intervals [0,s] and [s,∞). It follows that
(2.3) holds for x ∈ V(A). The validity of (2.3) for arbitrary
x ∈ X can now be obtained by a standard approximation argument
based on the fact that V(A) is dense in X and the continuity of
the operators involved. Substituting (2.3) in the expression
for RK^×φ, one immediately gets (2.2).

Next we deal with the identity

(2.4)    RU^× + JJ^× = I_{ImP_θ}.

Take x in ImP_θ. Then

(2.5)    RU^×x = -∫_0^∞ E(-t;-iA)BΛ_θ^×x(t)dt.

Apart from the minus sign, the right hand side of (2.5) is
exactly the same as the left hand side of (2.3) for s = 0. It
is easy to check that (2.3) also holds for s = 0, provided that
the right hand side is interpreted as -P_θx + P_θP_θ^×x. Thus
RU^×x = P_θx - P_θP_θ^×x = x - P_θP_θ^×x = x - JJ^×x, and (2.4) is
proved.

In the third place, we shall establish

(2.6)    (I-K)U^× + UJ^× = 0.

Take x ∈ ImP_θ. Then U^×x = -QΛ_θ^×x, where Q is the projection of
L_1^m(ℝ) onto L_1^m[0,∞) along L_1^m(-∞,0]. Here the latter two
spaces are considered as subspaces of L_1^m(ℝ). Observe now that
QΛ_θ^×x = Λ_θ^×(I-P_θ^×)x. For x ∈ V(A) = V(A^×) this is evident, and
for arbitrary x one can use an approximation argument. Hence
KU^×x = Qh, where -h = k_θ * Λ_θ^×(I-P_θ^×)x is the (full line)
convolution product of k_θ and Λ_θ^×(I-P_θ^×)x. Taking Fourier
transforms, one gets

    ĥ(λ) = C(λ-A)^{-1}BC(λ-A^×)^{-1}(I-P_θ^×)x
         = C(λ-A)^{-1}(I-P_θ^×)x - C(λ-A^×)^{-1}(I-P_θ^×)x.

Put g = U^×x + UP_θ^×x. Then g = Qg. Also
g = -Λ_θ^×(I-P_θ^×)x + Λ_θ(I-P_θ)P_θ^×x, and hence

    ĝ(λ) = -C(λ-A^×)^{-1}(I-P_θ^×)x - C(λ-A)^{-1}(I-P_θ)P_θ^×x.

Since x ∈ ImP_θ, it follows that

    ĥ(λ) - ĝ(λ) = C(λ-A)^{-1}P_θ(I-P_θ^×)x.

So ĥ(λ)-ĝ(λ) is the Fourier transform of -Λ_θP_θ(I-P_θ^×)x. But
then h-g = -Λ_θP_θ(I-P_θ^×)x = -(I-Q)Λ_θ(I-P_θ^×)x. Applying Q, we
now get Qh = Qg = g. In other words, KU^×x = U^×x + UP_θ^×x for all
x ∈ ImP_θ, which is nothing else than (2.6).

(2.7) I.

Let L be the (full line) convolution integral operator

associated with a, featuring in Theorem 1.2. Since e and eX are

both realization triples, the operator I-L is invertible and -1 x x

(I-L) ~ 1-L , where L is the convolution integral operator x

associated with e • With respect to the decomposition m m m L1 (R) = Ll[O,~) ~ Ll(-~,Ol, we write I-L and its inverse in the

Page 71: Constructive Methods of Wiener-Hopf Factorization

Symbols analytic in a strip 61

form

I-K x

I-L I-L x

Clearly (I-K)(I-K x ) + L_L~ x

suffices to show that L_L+

1. So, in order to prove (2.7), it x

UR •

Suppose, for the time being, that

(2.8)    L_-φ_- = -QΛ_θΓ_θφ_-,    φ_- ∈ L_1^m(-∞,0],

(2.9)    L_+^×φ_+ = (I-Q)Λ_θ^×Γ_θ^×φ_+,    φ_+ ∈ L_1^m[0,∞).

As was observed in the fourth paragraph of the present proof,
(2.3) also holds for s = 0, that is,

    ∫_0^∞ E(-t;-iA)BΛ_θ^×x(t)dt = -P_θ(I-P_θ^×)x.

Analogously, one has

    ∫_{-∞}^0 E(-t;-iA)BΛ_θ^×x(t)dt = (I-P_θ)P_θ^×x.

Hence

    L_-L_+^×φ = -QΛ_θΓ_θ(I-Q)Λ_θ^×Γ_θ^×φ = -QΛ_θ(I-P_θ)Γ_θ^×φ = UR^×φ.

Let us first prove (2.8) for the case when 1mB c V(A). Then we can write B in the form B = A- 1B1, where B1 : ~m + X is

a (bounded) linear operator. Wr~te C1 = CA- 1• Then C1 : X + ~m

is a bounded iinear operator too. Also, for each y E ~m,

iCE(tj-iA)By a.e. on IR.

Page 72: Constructive Methods of Wiener-Hopf Factorization

62 Bart, Gahberg and Kaashoek

Since ~m is finite dimensional, we may assume that

t 'I: O.

Take φ_- ∈ L_1^m(-∞,0]. Then L_-φ_- ∈ L_1^m[0,∞), and almost
everywhere on [0,∞)

    L_-φ_-(t) = -∫_{-∞}^0 k_θ(t-s)φ_-(s)ds
              = -∫_{-∞}^0 iC_1E(t-s;-iA)B_1φ_-(s)ds.

Next, use the semigroup properties of the bisemigroup E(·;-iA)
mentioned in Section 1.1. It follows that almost everywhere on
[0,∞)

    L_-φ_-(t) = -iC_1E(t;-iA) ∫_{-∞}^0 E(-s;-iA)B_1φ_-(s)ds
              = -iCE(t;-iA) ∫_{-∞}^0 E(-s;-iA)Bφ_-(s)ds.

Thus L_-φ_- = QL_-φ_- = -QΛ_θΓ_θφ_-. We have proved (2.8) now for
the case when ImB ⊂ V(A).

The general situation, where ImB need not be contained in V(A),
can be treated with an approximation argument based on the fact
that B can be approximated (in norm) by (bounded) linear
operators from ℂ^m into X having their range inside V(A). This
is true because V(A) is dense in X and ℂ^m is finite
dimensional. The proof of (2.9) is similar. □

II.3. Inversion and Fredholm properties

In this section we study inversion and Fredholm
properties of the Wiener-Hopf integral operator K,

(3.1)    Kφ(t) = ∫_0^∞ k(t-s)φ(s)ds,    a.e. on [0,∞).

It will be assumed that the m×m matrix kernel k admits a
spectral exponential representation. This implies that
k ∈ L_1^{m×m}(ℝ), and so K is a well-defined bounded linear operator
on L_1^m[0,∞).


THEOREM 3.1. Assume the kernel k of the integral
operator K given by (3.1) is the kernel associated with the
realization triple θ = (A,B,C), i.e., k = k_θ. Then I-K is a
Fredholm operator if and only if θ^× = (A^×,B,-C) is a
realization triple, and in that case the following statements
hold true:

(i) ImP_θ ∩ KerP_θ^× is finite dimensional, ImP_θ + KerP_θ^× is
closed with finite codimension in the (common) state
space X of θ and θ^×, and

    ind(I-K) = dim(ImP_θ ∩ KerP_θ^×) - codim(ImP_θ + KerP_θ^×),

(ii) a function φ belongs to Ker(I-K) if and only if there
exists a (unique) x ∈ ImP_θ ∩ KerP_θ^× such that

    φ(t) = -Λ_θ^×x(t),    a.e. on [0,∞),

(iii) dim Ker(I-K) = dim(ImP_θ ∩ KerP_θ^×),

(iv) a function ψ in L_1^m[0,∞) belongs to Im(I-K) if and
only if

    ∫_0^∞ E(-t;-iA^×)Bψ(t)dt ∈ ImP_θ + KerP_θ^×,

(v) codim Im(I-K) = codim(ImP_θ + KerP_θ^×).

The condition that θ^× = (A^×,B,-C) is a realization
triple may be replaced by any of the equivalent conditions in
Theorem 5.1 of Ch. I or by the condition that det(I-k̂(λ)) does
not vanish on the real line.

PROOF. The proof of the if part, including that of
(i) - (v), amounts to combining Theorem 2.1 of the previous
section, Theorem 5.3 in Ch. I and the results obtained in
[BGK3], Section 1.2. For details, see [BGK5], first part of the
proof of Theorem 2.1.

To prove the only if part of the theorem, one may
reason as follows. From [GK] it is known that I-K is Fredholm
if and only if det(I-k̂(λ)) does not vanish on the real line. By
Theorems 3.1 and 5.1 in Ch. I, the latter condition is
equivalent to the requirement that θ^× is a realization triple.
It is also possible to use a perturbation argument as in
[BGK5], second part of the proof of Theorem 2.1. □

Next we consider the special case when I-K is
invertible.

THEOREM 3.2. Assume the kernel k of the integral
operator K defined by (3.1) is the kernel associated with the
realization triple θ = (A,B,C), i.e., k = k_θ. Then I-K is
invertible if and only if the following two conditions are
satisfied:

(1) θ^× = (A^×,B,-C) is a realization triple,

(2) X = ImP_θ ⊕ KerP_θ^×, where X is the (common) state space
of θ and θ^×.

If (1) and (2) hold, then the inverse of I-K is given by

    (I-K)^{-1}ψ(t) = ψ(t) - ∫_0^∞ k_θ^×(t-s)ψ(s)ds
                    - [Λ_θ^×(I-Π) ∫_0^∞ E(-s;-iA^×)Bψ(s)ds](t)

(a.e. on [0,∞)), where Π is the projection of X onto
KerP_θ^× along ImP_θ.

A somewhat different expression for the inverse of I-K

will be given at the end of Section 4 below. Analogous results

can be obtained for left inverses, right inverses and

generalized inverses (cf. [BGK5], Theorem 2.2 and the
discussion thereafter). Also here, the condition that Θ^× is a
realization triple may be replaced by any of the equivalent
conditions in Theorem 5.1 of Ch.I or by the condition that
det(I − k̂(λ)) does not vanish on the real line.

PROOF. The first part of the theorem is immediate from


Symbols analytic in a strip

Theorem 3.1. With regard to the second part, we argue as

follows.

Suppose (1) and (2) are satisfied. Then I−K is
invertible and, by virtue of Theorem 2.1 in Ch.I of [BGK3], its
inverse is given by

(I−K)⁻¹

Here K^×, U^×, J^× and R^× are as in Theorem 2.1. So, for
ψ ∈ L₁^m[0,∞),

where Q is the projection of L₁^m(R) onto L₁^m[0,∞) along
L₁^m(−∞,0]. The desired result is now clear from the fact that
(J^×)⁻¹P_Θ^× is the projection of X onto Im P_Θ along Ker P_Θ^×. □


The projection Π appearing in Theorem 3.2 maps D(A)

into itself. This fact is interesting in its own right and will

also be used in the next section. Therefore we formally state

it as a proposition.

PROPOSITION 3.3. Suppose Θ = (A,B,C) and Θ^× =
(A^×,B,−C) are realization triples, and assume in addition
that X = Im P_Θ ⊕ Ker P_Θ^×. Here X is the (common) state space
of Θ and Θ^×. Let Π be the projection of X onto Ker P_Θ^× along
Im P_Θ. Then Π maps the domain of A into itself.

PROOF. With the notation of Theorem 2.1, we have Π =
I − (J^×)⁻¹P_Θ^×. So it suffices to show that (J^×)⁻¹ maps
D(A) ∩ Im P_Θ^× into D(A) ∩ Im P_Θ.

According to Theorem 2.1 in Ch.I of [BGK3] and Theorem
2.1 in the present paper, the inverse of J^× is given by
(J^×)⁻¹ = J − R(I−K)⁻¹U. Take x ∈ D(A) ∩ Im P_Θ^×. Then
Ux = QΛ_Θx ∈ D₁^m(R) ∩ L₁^m[0,∞), where Q is the projection of
L₁^m(R) onto L₁^m[0,∞) along L₁^m(−∞,0]. From [GK] we know that
(I−K)⁻¹ is the product of two operators of the type
Q(I+F)|L₁^m[0,∞), where F is a convolution integral operator on
L₁^m(R) with an L₁-kernel (see also [BGK1], Section 4.5). Hence
(I−K)⁻¹ maps D₁^m(R) ∩ L₁^m[0,∞) into itself. Note in this
context that if k ∈ L₁^{m×m}(R) and f ∈ D₁^m(R), then k∗f ∈ D₁^m(R)
too and (k∗f)′ = k∗f′ + [f₊(0)−f₋(0)]k. Thus (I−K)⁻¹Ux ∈ D₁^m(R)
and Proposition 2.1 in Ch.I tells us that we end up in D(A) by
applying R. We conclude that R(I−K)⁻¹U maps D(A) ∩ Im P_Θ^× into
D(A) ∩ Im P_Θ. The same is true for J and (consequently) for
(J^×)⁻¹. □

II.4. Canonical Wiener-Hopf factorization

We begin by recalling the definition of canonical

Wiener-Hopf factorization. Let

(4.1) W(λ) = I − k̂(λ),

where k ∈ L₁^{m×m}(R). So W(λ) belongs to the (m×m matrix) Wiener
algebra with respect to the real line. A factorization

(4.2) W(λ) = W₋(λ)W₊(λ), −∞ < λ < ∞,

will be called a canonical Wiener-Hopf factorization if the
following conditions are satisfied:

(a) there exist k₋ ∈ L₁^{m×m}(−∞,0] and k₊ ∈ L₁^{m×m}[0,∞) such
that W₋(λ) = I − k̂₋(λ) and W₊(λ) = I − k̂₊(λ),

(b) det W₋(λ) does not vanish on the closed lower half
plane Im λ ≤ 0, and det W₊(λ) does not vanish on the
closed upper half plane Im λ ≥ 0.

If (4.2) is a canonical Wiener-Hopf factorization, then W₋(λ)
is continuous on the closed lower half plane, analytic in the
open lower half plane and

lim_{λ→∞, Im λ≤0} W₋(λ) = I.

Analogous observations can be made for W₊(λ), with the
understanding that the lower half plane has to be replaced by
the upper half plane. By Wiener's theorem, there exist
k₋^× ∈ L₁^{m×m}(−∞,0] and k₊^× ∈ L₁^{m×m}[0,∞) such that
W₋(λ)⁻¹ = I − k̂₋^×(λ) and W₊(λ)⁻¹ = I − k̂₊^×(λ). Also canonical
Wiener-Hopf factorizations as introduced above are unique
(provided of course that they exist).
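The uniqueness statement can be seen by a standard Liouville-type argument; the following sketch is added here for convenience and is not part of the original text:

```latex
% Suppose W = W_- W_+ = V_- V_+ are two canonical Wiener-Hopf
% factorizations. Then on the real line
V_-(\lambda)^{-1}W_-(\lambda) = V_+(\lambda)W_+(\lambda)^{-1}.
% The left-hand side extends analytically to \operatorname{Im}\lambda<0,
% the right-hand side to \operatorname{Im}\lambda>0; both are continuous
% up to the real line and tend to I as \lambda\to\infty. Hence the two
% sides together define an entire bounded function, which by
% Liouville's theorem is identically I, so W_- = V_- and W_+ = V_+.
```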

In this section we discuss canonical Wiener-Hopf
factorization of m×m matrix functions W(λ) of the form (4.1)
under the additional assumption that k ∈ L₁,ω^{m×m}(R) for some
negative constant ω. In particular W(λ) is analytic in a strip
around the real axis (but not necessarily at ∞). Recall from
Section I.4 that L₁,ω^{m×m}(R) coincides with the class of all
kernels of realization triples of exponential type ω.

THEOREM 4.1. Consider the function W(λ) = I − k̂(λ),
where k = k_Θ is the kernel associated with the realization
triple Θ = (A,B,C), i.e.,

W(λ) = I + C(λ−A)⁻¹B.

Then W(λ) admits a canonical Wiener-Hopf factorization if and
only if

(1) Θ^× = (A^×,B,−C) is a realization triple,

(2) X = Im P_Θ ⊕ Ker P_Θ^×.

Here X is the (common) state space of Θ and Θ^×. Suppose (1) and
(2) are satisfied, and let Π be the projection of X onto
Ker P_Θ^× along Im P_Θ. Then the canonical Wiener-Hopf
factorization of W(λ) has the form (4.2) with

(4.3) W₋(λ) = I + C(λ−A)⁻¹(I−Π)B,

(4.4) W₊(λ) = I + CΠ(λ−A)⁻¹B,

(4.5) W₋(λ)⁻¹ = I − C(I−Π)(λ−A^×)⁻¹B,

(4.6) W₊(λ)⁻¹ = I − C(λ−A^×)⁻¹ΠB.


As we have seen at the end of the previous section,
the projection Π maps D(A) into D(A) ⊂ D(C). It follows that
the right hand sides of (4.4) and (4.5) are well-defined for
the appropriate values of λ. This fact is obvious for the right
hand sides of (4.3) and (4.6). Without proof we note that the
operator CΠ featuring in (4.4) is A-bounded. Analogously
C(I−Π) appearing in (4.5) is A^×-bounded.

PROOF. Suppose (1) and (2) are satisfied, and let Π
be the projection of X onto Ker P_Θ^× along Im P_Θ. Then Π maps
D(A) (= D(A^×)) into itself. Hence, along with

(4.7) X = Im P_Θ ⊕ Ker P_Θ^×,

we have the decomposition

(4.8) D(A) = [D(A) ∩ Im P_Θ] ⊕ [D(A) ∩ Ker P_Θ^×].

This enables us to apply a generalized version (involving
unbounded operators) of the Factorization Principle introduced
and used in [BGKVD] and [BGK1]. In fact, the proof of the
second part of the theorem is a straightforward modification of
the proof of the first part of Theorem 1.5 in [BGK1]. The
details are as follows.

With respect to the decompositions (4.7) and (4.8), we
write

A = [ A₁  A₀ ]      B = [ B₁ ]      C = [ C₁  C₂ ].
    [ 0   A₂ ] ,        [ B₂ ] ,

Put Θ₁ = (A₁,B₁,C₁). Then Θ₁ is a realization triple. This is
clear from the fact that A₁ : Im P_Θ → Im P_Θ and C₁ : Im P_Θ → ℂ^m
are the restrictions of A and C to D(A) ∩ Im P_Θ, respectively.
Here ℂ^m is of course the (common) input/output space
of Θ and Θ^×. Note that iA₁ is the infinitesimal generator of a
C₀-semigroup of negative exponential type. Hence the kernel k₁
associated with Θ₁ has its support in (−∞,0] and


is defined and analytic on an open half plane of the type
Im λ < −ω with ω strictly negative.

Next we consider Θ₁^× = (A₁^×,B₁,−C₁), the inverse of Θ₁.
As in Sections II.2 and II.3, let J^× = P_Θ^×|Im P_Θ : Im P_Θ → Im P_Θ^×.
Then J^× is invertible and maps D(A) ∩ Im P_Θ onto D(A^×) ∩ Im P_Θ^×.
It is easy to check that J^× provides a similarity between the
operator A₁^× and the restriction of A^× to D(A^×) ∩ Im P_Θ^×. Hence
iA₁^× is the infinitesimal generator of a C₀-semigroup of
negative exponential type. But then Theorem 5.1 in Ch.I
guarantees that Θ₁^× is a realization triple. Further, the kernel
k₁^× associated with Θ₁^× has its support in (−∞,0] and

for all λ with Im λ < −ω. Here it is assumed that the negative
constant ω has been taken sufficiently close to zero.

Put Θ₂ = (A₂,B₂,C₂) and Θ₂^× = (A₂^×,B₂,−C₂), where
A₂^× = A₂ − B₂C₂. So Θ₂ and Θ₂^× are each other's inverse. Obviously
Θ₂ is a realization triple, and a similarity argument of the
type presented above yields that the same is true for Θ₂^×. The
operators −iA₂ and −iA₂^× are infinitesimal generators of C₀-
semigroups of negative exponential type. Hence the kernels k₂
and k₂^× associated with Θ₂ and Θ₂^×, respectively, have their
support in [0,∞). Finally, taking |ω| smaller if necessary, we
have that

k̂₂(λ) and k̂₂^×(λ)

are defined and analytic on Im λ > ω.

We may assume that both Θ and Θ^× are of exponential
type ω. For |Im λ| < −ω, one then has


W(λ) = I + C₁(λ−A₁)⁻¹B₁ + C₂(λ−A₂)⁻¹B₂
         + C₁(λ−A₁)⁻¹A₀(λ−A₂)⁻¹B₂.

Now Ker P_Θ^× is an invariant subspace for A^×, and so A₀ = B₁C₂.
Substituting this in the above expression for W(λ), we get
W(λ) = W₁(λ)W₂(λ), where W₁(λ) = I + C₁(λ−A₁)⁻¹B₁ and
W₂(λ) = I + C₂(λ−A₂)⁻¹B₂. Clearly this is a canonical
Wiener-Hopf factorization. One verifies without difficulty that
W₁(λ) = W₋(λ) and W₂(λ) = W₊(λ), where W₋(λ) and W₊(λ) are as
defined in the theorem.
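The substitution step can be written out in one line (a check added here for convenience; it is not part of the original proof): with A₀ = B₁C₂ the cross term factors through B₁ and C₂, so that

```latex
W(\lambda)
  = I + C_1(\lambda-A_1)^{-1}B_1 + C_2(\lambda-A_2)^{-1}B_2
      + C_1(\lambda-A_1)^{-1}B_1\,C_2(\lambda-A_2)^{-1}B_2
  = \bigl(I + C_1(\lambda-A_1)^{-1}B_1\bigr)
    \bigl(I + C_2(\lambda-A_2)^{-1}B_2\bigr)
  = W_1(\lambda)W_2(\lambda).
```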

This settles the second part of the theorem. In order
to establish the first, we recall from [GK] that W(λ) admits a
canonical Wiener-Hopf factorization (if and) only if I−K is
invertible, where K is as in Sections II.2 and II.3. The
desired result is now clear from Theorem 3.2 above. □

Let us return to the second part of Theorem 4.1 and
its proof. Adopting (or rather extending) the terminology and
notation of Section 1.1 in [BGK1], we say that Θ₁ is the
projection of Θ associated with I−Π and write Θ₁ = pr_{I−Π}(Θ).
Similarly, Θ₂ = pr_Π(Θ), Θ₁^× = pr_{I−Π}(Θ^×) and Θ₂^× = pr_Π(Θ^×).
What we have got then is a canonical Wiener-Hopf factorization
W(λ) = W₋(λ)W₊(λ) with

W₋(λ) = I − k̂₋(λ),   W₋(λ)⁻¹ = I − k̂₋^×(λ),
W₊(λ) = I − k̂₊(λ),   W₊(λ)⁻¹ = I − k̂₊^×(λ),

where k₋, k₊, k₋^× and k₊^× are the kernels associated with
pr_{I−Π}(Θ), pr_Π(Θ), pr_{I−Π}(Θ^×) and pr_Π(Θ^×), respectively. For
ω < 0 sufficiently close to zero, k₋ ∈ L₁,ω^{m×m}(−∞,0],
k₋^× ∈ L₁,ω^{m×m}(−∞,0], k₊ ∈ L₁,ω^{m×m}[0,∞) and k₊^× ∈ L₁,ω^{m×m}[0,∞).

Suppose conditions (1) and (2) of Theorem 4.1 are


satisfied, and let K be the Wiener-Hopf integral operator
defined by (3.1) with k = k_Θ. Then I−K is invertible by Theorem
3.2. For the inverse of I−K one can now write

(I−K)⁻¹ψ(t) = ψ(t) − ∫₀^∞ γ(t,s)ψ(s)ds,

where (almost everywhere)

γ(t,s) = k₊^×(t−s) + k₋^×(t−s) − ∫₀^{min(t,s)} k₊^×(t−r)k₋^×(r−s)dr

and k₋^×, k₊^× are as above (see [GK]; cf. also [BGK1], Section
4.5).

II.5. The Riemann-Hilbert boundary value problem

In this section we deal with the Riemann-Hilbert
boundary value problem (on the real line)

(5.1) W(λ)Φ₊(λ) = Φ₋(λ), −∞ < λ < ∞,

the precise formulation of which reads as follows: Given an
m×m matrix function W(λ), −∞ < λ < ∞, with continuous entries,
describe all pairs Φ₊, Φ₋ of ℂ^m-valued functions such that
(5.1) is satisfied while, in addition, Φ₊ and Φ₋ are the
Fourier transforms of integrable ℂ^m-valued functions with
support in [0,∞) and (−∞,0], respectively. For such a pair of
functions, we have that Φ₊ (resp. Φ₋) is continuous on the
closed upper (resp. lower) half plane, analytic in the open
upper (resp. lower) half plane and vanishes at infinity.

The functions W(λ) that we shall deal with are of the
type considered in the previous section. So W(λ) is of the form
(4.1) with k ∈ L₁,ω^{m×m}(R) for some negative constant ω. In
particular W(λ) is analytic in a strip around the real axis.

THEOREM 5.1. Consider the function W(λ) = I − k̂(λ),
where k = k_Θ is the kernel associated with the realization
triple Θ = (A,B,C), i.e.,

W(λ) = I + C(λ−A)⁻¹B.


Assume Θ^× = (A^×,B,−C) is a realization triple too (or,
equivalently, det W(λ) ≠ 0 for all λ ∈ R). Then the pair of
functions Φ₊, Φ₋ is a solution of the Riemann-Hilbert boundary
value problem (5.1) if and only if there exists a (unique)
vector x in Im P_Θ ∩ Ker P_Θ^× such that

Φ₊(λ) = C(λ−A^×)⁻¹x = ∫₀^∞ e^{iλt}(Λ_Θ^× x)(t)dt,

Φ₋(λ) = C(λ−A)⁻¹x = −∫_{−∞}^0 e^{iλt}(Λ_Θ x)(t)dt.

PROOF. The proof is analogous to that of Theorem 5.1
in [BGK5]. For the if part, use formula (3.3) in Ch.I; for the
only if part employ Theorem 3.1. □
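As a consistency check (added here; it is not part of the original proof and argues only formally, ignoring domain questions in the unbounded case), the pair above does satisfy (5.1), using only the identity BC = A − A^×:

```latex
% Resolvent identity, since BC = A - A^{\times}
%                            = (\lambda-A^{\times}) - (\lambda-A):
(\lambda-A)^{-1}BC(\lambda-A^{\times})^{-1}
  = (\lambda-A)^{-1} - (\lambda-A^{\times})^{-1}.
% Hence
W(\lambda)\Phi_{+}(\lambda)
  = \bigl(I + C(\lambda-A)^{-1}B\bigr)C(\lambda-A^{\times})^{-1}x
  = C(\lambda-A^{\times})^{-1}x
    + C\bigl[(\lambda-A)^{-1}-(\lambda-A^{\times})^{-1}\bigr]x
  = C(\lambda-A)^{-1}x = \Phi_{-}(\lambda).
```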

REFERENCES

[BGK1] Bart, H., Gohberg, I. and Kaashoek, M.A.: Minimal
factorization of matrix and operator functions.
Operator Theory: Advances and Applications, Vol. 1,
Birkhäuser Verlag, Basel etc., 1979.

[BGK2] Bart, H., Gohberg, I. and Kaashoek, M.A.: Wiener-Hopf
integral equations, Toeplitz matrices and linear
systems. In: Toeplitz Centennial, Operator Theory:
Advances and Applications, Vol. 4 (Ed. I. Gohberg),
Birkhäuser Verlag, Basel etc., 1982, pp. 85-135.

[BGK3] Bart, H., Gohberg, I. and Kaashoek, M.A.: The coupling
method for solving integral equations. In: Topics in
Operator Theory, Systems and Networks, The Rehovot
Workshop, Operator Theory: Advances and Applications,
Vol. 12 (Eds. H. Dym, I. Gohberg), Birkhäuser Verlag,
Basel etc., 1984, pp. 39-73.

[BGK4] Bart, H., Gohberg, I. and Kaashoek, M.A.: Exponentially
dichotomous operators and inverse Fourier transforms.
Report 8511/M, Econometric Institute, Erasmus
University Rotterdam, The Netherlands, 1985.

[BGK5] Bart, H., Gohberg, I. and Kaashoek, M.A.: Fredholm
theory of Wiener-Hopf equations in terms of realization
of their symbols. Integral Equations and Operator
Theory 8, 590-613 (1985).

[BGK6] Bart, H., Gohberg, I. and Kaashoek, M.A.: Wiener-Hopf
factorization, inverse Fourier transforms and
exponentially dichotomous operators. J. Functional
Analysis, 1986 (to appear).

[BGKVD] Bart, H., Gohberg, I., Kaashoek, M.A. and Van Dooren,
P.: Factorizations of transfer functions. SIAM J.
Control Optim. 18, 675-696 (1980).

[BK] Bart, H. and Kroon, L.G.: An indicator for Wiener-Hopf
integral equations with invertible analytic symbol.
Integral Equations and Operator Theory 6, 1-20 (1983).
See also the addendum to this paper: Integral Equations
and Operator Theory 6, 903-904 (1983).

[CG] Curtain, R.F. and Glover, K.: Balanced realisations for
infinite dimensional systems. In: Operator Theory and
Systems, Proceedings Workshop Amsterdam, June 1985,
Operator Theory: Advances and Applications, Vol. 19
(Eds. H. Bart, I. Gohberg, M.A. Kaashoek), Birkhäuser
Verlag, Basel etc., 1986, pp. 86-103.

[F] Fuhrmann, P.A.: Linear systems and operators in
Hilbert space. McGraw-Hill, New York, 1981.

[GRS] Gelfand, I., Raikov, D. and Shilov, G.: Commutative
normed rings. Chelsea Publishing Company, Bronx, New
York, 1964.

[GK] Gohberg, I. and Krein, M.G.: Systems of integral
equations on a half line with kernels depending on the
difference of the arguments. Uspehi Mat. Nauk 13,
no. 2 (80), 3-72 (1958). Translated as: Amer. Math.
Soc. Transl. (2) 14, 217-287 (1960).

[HP] Hille, E. and Phillips, R.S.: Functional analysis and
semi-groups. Amer. Math. Soc., Providence, R.I., 1957.

[Ka] Kailath, T.: Linear systems. Prentice-Hall Inc.,
Englewood Cliffs, N.J., 1980.

[K] Kalman, R.E.: Mathematical description of linear
dynamical systems. SIAM J. Control 1, 152-192 (1963).

[KFA] Kalman, R.E., Falb, P. and Arbib, M.A.: Topics in
mathematical system theory. McGraw-Hill, New York etc.,
1969.

[P] Pazy, A.: Semigroups of linear operators and
applications to partial differential equations. Applied
Mathematical Sciences, Vol. 44, Springer-Verlag, New
York etc., 1983.

H. Bart
Econometric Institute
Erasmus Universiteit
Postbus 1738
3000 DR Rotterdam
The Netherlands

M.A. Kaashoek
Subfaculteit Wiskunde en Informatica
Vrije Universiteit
Postbus 7161
1007 MC Amsterdam
The Netherlands

I. Gohberg
Dept. of Mathematical Sciences
The Raymond and Beverly Sackler
Faculty of Exact Sciences
Tel-Aviv University
Ramat-Aviv, Israel


Operator Theory: Advances and Applications, Vol. 21 © 1986 Birkhauser Verlag Basel

ON TOEPLITZ AND WIENER-HOPF OPERATORS WITH CONTOURWISE RATIONAL MATRIX AND OPERATOR SYMBOLS

I. Gohberg, M.A. Kaashoek, L. Lerer, L. Rodman*)

Dedicated to the memory of David Milman

Explicit formulas for the (generalized) inverse and criteria of invertibility are given for block Toeplitz and Wiener-Hopf type operators. We consider operators with symbols defined on a curve composed of several non-intersecting simple closed contours. Also criteria and explicit formulas for canonical factorization of matrix functions relative to a compound contour are presented. The matrix functions we work with are rational on each of the compounding contours but the rational expressions may vary from contour to contour. We use realizations for each of the rational expressions and the final results are stated in terms of invertibility properties of a certain finite matrix called the indicator, which is built from the realizations. The analysis does not depend on finite dimensionality and is carried out for operator valued symbols.

TABLE OF CONTENTS
0. INTRODUCTION
1. INDICATOR
2. TOEPLITZ OPERATORS ON COMPOUNDED CONTOURS
3. PROOF OF THE MAIN THEOREM
4. THE BARRIER PROBLEM
5. CANONICAL FACTORIZATION
6. UNBOUNDED DOMAINS
7. THE PAIR EQUATION
8. WIENER-HOPF EQUATION WITH TWO KERNELS
9. THE DISCRETE CASE

REFERENCES

*) The work of this author was partially supported by the Fund for Basic Research administered by the Israel Academy of Sciences and Humanities.


76 Gohberg, Kaashoek, Lerer and Rodman

INTRODUCTION

The main part of this paper concerns Toeplitz operators of which
the symbol W is an m×m matrix function defined on a disconnected curve Γ.
The curve Γ is assumed to be the union of s+1 nonintersecting simple
smooth closed contours Γ₀, Γ₁, ..., Γ_s which form the positively oriented
boundary of a finitely connected bounded domain in ℂ. Our main requirement
on the symbol W is that on each contour Γ_j the function W is the
restriction of a rational matrix function W_j which does not have poles
and zeros on Γ_j and at infinity. Using the realization theorem from
system theory (see, e.g., [1], Chapter 2) the rational matrix function W_j
(which differs from contour to contour) may be written in the form

(0.1) W_j(λ) = I + C_j(λ − A_j)⁻¹B_j, λ ∈ Γ_j,

where A_j is a square matrix of size n_j × n_j, say, B_j and C_j are
matrices of sizes n_j × m and m × n_j, respectively, and the matrices A_j
and A_j^× = A_j − B_jC_j have no eigenvalues on Γ_j. (In (0.1) the
functions W_j are normalized to I at infinity.) Our aim is to get the
inverse of the Toeplitz operator with symbol W and the canonical
factorization of W in terms of the matrices A_j, B_j, C_j (j = 0,1,...,s).
When the rational matrix functions W_j do not depend on j, our results
contain those of [1], Sections 1.2 and 4.4, and of [3], Section III.2.
The case when W₀, W₁, ..., W_s are (possibly different) matrix
polynomials has been treated in [9].

In achieving our goal an important role is played by the (left)
indicator, an (s+1) × (s+1) block matrix S = [S_jk]_{j,k=0}^s of which the
entries are defined as follows. For j = 0,1,...,s let P_j (resp., P_j^×)
be the Riesz projection corresponding to the eigenvalues of A_j (resp.,
A_j^×) in the outer domain of Γ_j, and for j,k = 0,1,...,s define
S_jk : Im P_k^× → Im P_j by setting

S_jj x = P_j x (x ∈ Im P_j^×),

S_jk x = − (1/2πi) ∫_{Γ_j} (λ−A_j)⁻¹B_jC_k(λ−A_k^×)⁻¹x dλ (x ∈ Im P_k^×, j ≠ k).

Note that for x ∈ Im P_k^× the term (λ−A_k^×)⁻¹x makes sense for each λ in
the inner domain of Γ_k, and hence S_jk is well-defined.


Contourwise rational symbols 77

The Toeplitz operator we study is defined in the following way. Let B be the Banach space of all ℂ^m-valued functions on Γ that are Hölder continuous with a fixed exponent α ∈ (0,1), and let P_Γ = ½(I + S_Γ), where S_Γ : B → B is the operator of singular integration on Γ. The Toeplitz operator T with symbol W on Γ is now defined on B⁺ = P_Γ(B) by setting Tφ = P_Γ(Wφ), φ ∈ B⁺.

The following is a sample of our main results. We prove that the Toeplitz operator T with symbol W is invertible if and only if the indicator S corresponding to W has a nonzero determinant, and we express the inverse of T in terms of S⁻¹ and the matrices A_j, B_j, C_j (j = 0,1,...,s). This result is a by-product of a more general theorem which gives explicit formulas for a generalized inverse of T and for the kernel and image of T. Also through the indicator we obtain criteria and explicit formulas for canonical factorization of the symbol. Our results about the canonical factorization are also valid for unbounded domains, which allows us to obtain inversion formulas for Wiener-Hopf pair equations and equations with two kernels.

This paper consists of nine sections. In Section 1 we introduce the left indicator and also its dual version. The main results on Toeplitz operators are stated in Section 2 and proved in Section 3. In the fourth section we give applications to the barrier problem. Canonical factorization is treated in Section 5 for bounded domains and in Section 6 for unbounded domains. Sections 7 and 8 contain the applications to operators of Wiener-Hopf type. The discrete case is discussed in the last section.

We remark that our analysis does not depend on finite dimensionality and is carried out for operator-valued symbols. So in the representation (0.1) we allow the operators A_j, B_j and C_j to act between infinite dimensional spaces.

In this paper the following notation is used. Given Banach space
operators K_i : X_i → X_i, i = 1,...,m, we denote by diag[K_i]_{i=1}^m or
K₁ ⊕ ... ⊕ K_m the operator K acting on the direct sum X₁ ⊕ X₂ ⊕ ... ⊕ X_m
as follows: Kx = K_i x, x ∈ X_i. If K_i : X_i → Y, i = 1,...,m are
operators, the notation row[K_i]_{i=1}^m stands for the naturally defined
operator [K₁ K₂ ... K_m] : X₁ ⊕ ... ⊕ X_m → Y. Analogously, for operators
K_i : Y → X_i we denote by col[K_i]_{i=1}^m the operator from Y into
X₁ ⊕ ... ⊕ X_m whose i-th component is K_i. The notation col[x_i]_{i=1}^m
is also used to designate vectors x ∈ X₁ ⊕ ... ⊕ X_m whose i-th coordinate
is x_i, i = 1,...,m. The identity operator on the Banach space X is
denoted I_X.

1. Indicator

Let Γ₀,...,Γ_s be a system of simple closed rectifiable non-
intersecting smooth contours in the complex plane which form the positively
oriented boundary Γ of a finitely connected bounded open set Ω⁺.
Denote by Ω₀⁻, Ω₁⁻, ..., Ω_s⁻ the outer domains of the curves Γ₀, Γ₁, ..., Γ_s,
respectively. We assume that Ω₁⁻,...,Ω_s⁻ are in the inner domain of Γ₀
and Ω₀⁻ is unbounded. Put Ω⁻ = ∪_{j=0}^s Ω_j⁻. The notation L(Y) is used
for the Banach algebra of all bounded linear operators acting on a
(complex) Banach space Y.

Consider an L(Y)-valued function W(λ) on Γ which on each Γ_j
(j = 0,...,s) admits the representation

(1.1) W(λ) = W_j(λ) = I + C_j(λ − A_j)⁻¹B_j, λ ∈ Γ_j,

where C_j : X_j → Y, A_j : X_j → X_j, B_j : Y → X_j are (bounded linear)
operators and X_j is a Banach space.

The representation of the form (1.1) is called a realization of
W_j. An important particular case appears when dim Y < ∞ and the functions
W₀, W₁, ..., W_s are rational with value I at infinity. It is well-known
that for such functions realizations (1.1) always exist with finite
dimensional X_j, j = 0,1,...,s. More generally (see Chapter 2 in [1]), a
realization (1.1) exists if and only if the L(Y)-valued function W_j is
analytic in a neighborhood of Γ_j.


Given W(λ) realized as in (1.1), assume that the operators A_j
and A_j^× = A_j − B_jC_j have no spectrum on Γ_j. Note that this implies
(two-sided bounded) invertibility of W(λ) for each λ ∈ Γ. In fact, there
is a realization

W(λ)⁻¹ = W_j(λ)⁻¹ = I − C_j(λ − A_j^×)⁻¹B_j, λ ∈ Γ_j.

Conversely, if dim Y < ∞ and the rational matrix function W_i(λ) together
with its inverse W_i(λ)⁻¹ has no poles on Γ_i (i = 0,...,s), then one can
take the realizations (1.1) for W₀,...,W_s to be minimal (see, e.g., [1],
Chapter 2), which ensures that σ(A_j) ∩ Γ_j = σ(A_j^×) ∩ Γ_j = ∅,
j = 0,...,s.

We now introduce the notion of indicator for the realizations
(1.1) of the function W(λ). By definition, P_j is the spectral projection
of A_j corresponding to the part of σ(A_j) in the domain Ω_j⁻
(j = 0,...,s). Similarly, P_j^× is the spectral projection of A_j^×
corresponding to the part of σ(A_j^×) in Ω_j⁻. Introduce the spaces

Z = ⊕_{j=0}^s Im P_j,   Z^× = ⊕_{j=0}^s Im P_j^×.

Observe that in these direct sum representations of the spaces Z and Z^×
some of the summands Im P_j and Im P_j^× may be zero. However, this does
not affect the formalism that follows.

Consider the operator matrix R = [R_kj]_{k,j=0}^s : Z → Z^×, where

R_jj x = P_j^× x (x ∈ Im P_j),

R_kj x = (1/2πi) ∫_{Γ_k} (λ − A_k^×)⁻¹B_kC_j(λ − A_j)⁻¹x dλ (x ∈ Im P_j, k ≠ j).

Note that for x ∈ Im P_j the term (λ − A_j)⁻¹x is well-defined for each
λ ∉ Ω̄_j⁻. Since for k ≠ j the curve Γ_k is outside Ω̄_j⁻, it is clear
that for x ∈ Im P_j the function

φ(λ) = (λ − A_k^×)⁻¹(I − P_k^×)B_kC_j(λ − A_j)⁻¹x

is analytic in Ω_k⁻ (and has a zero of order ≥ 2 at ∞ if k = 0), and thus
Im R_kj ⊂ Im P_k^× for k ≠ j. We shall refer to R as the


right indicator of the function W(λ) relative to the realizations (1.1)
(and to the multiple contour Γ). We emphasize that the right indicator
depends not only on W(λ) and Γ but also on the realizations (1.1) for
W(λ). If (A_j, B_j, C_j) = (A,B,C) for each j, then R_kj x = P_k^× x for
x ∈ Im P_j and R is the usual indicator (see [3]).

The notion of left indicator is introduced analogously. Namely,
the left indicator of W(λ) relative to the realizations (1.1) is the
operator S = [S_jk]_{j,k=0}^s : Z^× → Z, where

S_jj x = P_j x (x ∈ Im P_j^×);

S_jk x = − (1/2πi) ∫_{Γ_j} (λ − A_j)⁻¹B_jC_k(λ − A_k^×)⁻¹x dλ (x ∈ Im P_k^×, j ≠ k).

Again, one checks easily that S_jk maps Im P_k^× into Im P_j, so S is
well-defined.

We shall see in Section 5 that invertibility of the right (left)
indicator is equivalent to existence of a right (left) canonical
factorization of W(λ). Here we mention only that the indicators satisfy
certain Lyapunov equations. In fact,

(1.2) SM^× − MS = −BC^×,

where

(1.3) M = diag[A_i|Im P_i]_{i=0}^s : Z → Z,

(1.4) M^× = diag[A_i^×|Im P_i^×]_{i=0}^s : Z^× → Z^×,

(1.5) C^× = row[C_i|Im P_i^×]_{i=0}^s : Z^× → Y,

(1.6) B = col[P_iB_i]_{i=0}^s : Y → Z.

The equality (1.2) is easily verified (cf. Section 1.4 in [5]).
Analogously,

(1.7) RM − M^×R = −B^×C,


where

(1.8) C = row[C_i|Im P_i]_{i=0}^s : Z → Y,

(1.9) B^× = −col[P_i^×B_i]_{i=0}^s : Y → Z^×.

2. Toeplitz operators on compounded contours

In this section we shall introduce and study Toeplitz operators
whose symbol is an operator valued function W(λ) admitting the
representation (1.1).

Let B be an admissible Banach space of Y-valued functions (e.g.,
B = H^α(Γ,Y), the Banach space of all functions from Γ into Y that are
Hölder continuous with a fixed Hölder exponent α, 0 < α < 1). Define

P_Γ : B → B,   (P_Γ φ)(λ) = ½( φ(λ) + (1/πi) ∫_Γ φ(θ)/(θ−λ) dθ ),  λ ∈ Γ,

where the integral is understood in the Cauchy principal value sense. Then
P_Γ is a (bounded) projection, and we have the direct sum decomposition
B = B⁺ ⊕ B⁻ with B⁺ = Im P_Γ, B⁻ = Ker P_Γ. Here B⁺ consists of all
functions from B which admit analytic extensions into Ω⁺, while B⁻
consists of all functions from B which admit analytic extensions into Ω⁻
and take the value 0 at infinity.

Let M_W be the operator of multiplication by W on B, and write
M_W as a 2×2 operator matrix with respect to the direct sum decomposition
B = B⁺ ⊕ B⁻:

M_W = [ T    M₁₂ ]
      [ M₂₁  M₂₂ ] : B⁺ ⊕ B⁻ → B⁺ ⊕ B⁻.

We shall refer to T as the Toeplitz operator with symbol W. For the
2×2 operator matrix representing the operator of multiplication by
W(λ)⁻¹ we shall use the notation

M_{W⁻¹} = [ T^×    M₁₂^× ]
          [ M₂₁^×  M₂₂^× ] : B⁺ ⊕ B⁻ → B⁺ ⊕ B⁻,


so T^× is the Toeplitz operator with symbol W⁻¹.

THEOREM 2.1. Let T be a Toeplitz operator whose symbol W(λ)
admits the representation (1.1) and let S be the left indicator of W(λ)
(with respect to the realizations (1.1) and contour Γ). Put
Z = ⊕_{j=0}^s Im P_j and Z^× = ⊕_{j=0}^s Im P_j^×. Then the operators
T ⊕ I_{B⁻} ⊕ I_Z : B⁺ ⊕ B⁻ ⊕ Z → B⁺ ⊕ B⁻ ⊕ Z and
S ⊕ I_{B⁺} ⊕ I_{B⁻} : Z^× ⊕ B⁺ ⊕ B⁻ → Z ⊕ B⁺ ⊕ B⁻ are equivalent.
More precisely,

(T ⊕ I_{B⁻} ⊕ I_Z)Q₁ = Q₂(S ⊕ I_{B⁺} ⊕ I_{B⁻}),

where

Q₁ = [ M₁₂^×M₂₁U^×   T^×     M₁₂^×M₂₂ ]
     [ M₂₂^×M₂₁U^×   M₂₁^×   M₂₂^×M₂₂ ] : Z^× ⊕ B⁺ ⊕ B⁻ → B⁺ ⊕ B⁻ ⊕ Z
     [ S             0       N        ]

and

Q₂ = [ −M₁₂M₂₁^×U   I_{B⁺} − M₁₂M₂₁^×   −M₁₂M₂₂^×M₂₂ ]
     [ M₂₁^×U       M₂₁^×               M₂₂^×M₂₂    ] : Z ⊕ B⁺ ⊕ B⁻ → B⁺ ⊕ B⁻ ⊕ Z
     [ I_Z          0                   N            ]

are invertible operators with inverses given by

Q₁⁻¹ = [ −N^×M₂₁    −N^×M₂₂    R       ]
       [ T          M₁₂        0       ] : B⁺ ⊕ B⁻ ⊕ Z → Z^× ⊕ B⁺ ⊕ B⁻
       [ M₂₂^×M₂₁   M₂₂^×M₂₂   −M₂₁^×U ]

and

Q₂⁻¹ = [ NM₂₁^×    −NM₂₂^×M₂₂   SR      ]
       [ I_{B⁺}    M₁₂          0       ] : B⁺ ⊕ B⁻ ⊕ Z → Z ⊕ B⁺ ⊕ B⁻.
       [ −M₂₁^×    M₂₂^×M₂₂     −M₂₁^×U ]

Here R is the right indicator of W(λ), and

N : B⁻ → Z,   Nφ = col( (1/2πi) ∫_{Γ_j} (λ−A_j)⁻¹P_jB_jφ(λ)dλ )_{j=0}^s,  φ ∈ B⁻;

U : Z → B⁺,   U(col(x_j)_{j=0}^s) = Σ_{j=0}^s C_j(λ−A_j)⁻¹P_jx_j,  x_j ∈ Im P_j,  j = 0,...,s;

N^× : B⁻ → Z^×,   N^×φ = col( (1/2πi) ∫_{Γ_j} (λ−A_j^×)⁻¹P_j^×B_jφ(λ)dλ )_{j=0}^s,  φ ∈ B⁻;

U^× : Z^× → B⁺,   U^×(col(x_j)_{j=0}^s) = Σ_{j=0}^s C_j(λ−A_j^×)⁻¹P_j^×x_j,  x_j ∈ Im P_j^×,  j = 0,...,s.

Theorem 2.1 will allow us to write down explicitly the connections
between the kernels and the images of T and S as well as to express the
generalized inverses of T and S in terms of each other. Recall that an
operator V is said to have a generalized inverse V^I whenever V = VV^IV.

THEOREM 2.2. Let T be a Toeplitz operator whose symbol W(λ)
admits the representation (1.1), and let S be the left indicator of W(λ)
(with respect to the realizations (1.1)).

Then

(2.1) Ker T = { φ : Γ → Y | φ(λ) = U^×x ≝ Σ_{j=0}^s C_j(λ−A_j^×)⁻¹P_j^×x_j,
x = col(x_j)_{j=0}^s ∈ Ker S }

and Im T consists of all functions φ ∈ B⁺ such that

(2.2) Λφ ≝ col( −(1/2πi) ∫_{Γ_j} P_jP_j^×(λ−A_j^×)⁻¹B_jφ(λ)dλ )_{j=0}^s ∈ Im S.

Further, if S^I is a generalized inverse of S, then

(2.3) T^I = T^× + U^×[R(I − SS^I) + S^I]Λ

is a generalized inverse of T. In (2.3) R is the right indicator of W(λ)
and the operators U^× and Λ are as in (2.1) and (2.2).

The proofs of Theorems 2.1 and 2.2 will be given in the next
section.

3. Proof of the main theorems

The proof will be given in several steps. Firstly, we shall
establish a general equivalence theorem. Consider the following spaces
and (linear bounded) operators:

B = B₁ ⊕ B₂,   B^× = B₁^× ⊕ B₂^×,

Z = Z₁ ⊕ Z₂,   Z^× = Z₁^× ⊕ Z₂^×,

U : Z₁ → B₂^×,   U^× : Z₁^× → B₂,

P = [ N  G ]
    [ 0  H ] : B₁ ⊕ B₂ → Z₁ ⊕ Z₂,

P^× = [ N^×  G^× ]
      [ 0    H^× ] : B₁^× ⊕ B₂^× → Z₁^× ⊕ Z₂^×,

D = [ D₁₁   D₁₂ ]
    [ −UN   D₂₂ ] : B₁ ⊕ B₂ → B₁^× ⊕ B₂^×,

D^× = [ D₁₁^×     D₁₂^× ]
      [ −U^×N^×   D₂₂^× ] : B₁^× ⊕ B₂^× → B₁ ⊕ B₂,

K = [ R − G^×U   K₁₂ ]
    [ L − H^×U   K₂₂ ] : Z₁ ⊕ Z₂ → Z₁^× ⊕ Z₂^×,

K^× = [ R^× − GU^×   K₁₂^× ]
      [ L^× − HU^×   K₂₂^× ] : Z₁^× ⊕ Z₂^× → Z₁ ⊕ Z₂.

THEOREM 3.1. Assume

(i) P^×D = KP, U^×R = −D₂₂^×U, UR^× = −D₂₂U^×,

(ii) D is invertible, D⁻¹ = D^×,

(iii) K is invertible, K⁻¹ = K^×.

Then the operators D₁₁ ⊕ I_{Z₁^×} and R ⊕ I_{B₁^×} are equivalent. More
precisely:

(3.1) D₁₁ ⊕ I_{Z₁^×} = E(R ⊕ I_{B₁^×})F,

where the operators

E = [ D₁₂U^×    I_{B₁^×} + D₁₂U^×N^× ]
    [ I_{Z₁^×}  N^×                  ] : Z₁^× ⊕ B₁^× → B₁^× ⊕ Z₁^×,

F = [ −N    R^×     ]
    [ D₁₁   −D₁₂U^× ] : B₁ ⊕ Z₁^× → Z₁ ⊕ B₁^×

are invertible with

E⁻¹ = [ −N^×      RR^×    ]
      [ I_{B₁^×}  −D₁₂U^× ] : B₁^× ⊕ Z₁^× → Z₁^× ⊕ B₁^×,

F⁻¹ = [ D₁₂^×U   D₁₁^× ]
      [ R        N^×   ] : Z₁ ⊕ B₁^× → B₁ ⊕ Z₁^×.


(3.2a)

(3.2b)

(3.3a)

(3.3b)

(3.4a)

(3.4b)

(3.5a)

(3.5b)

Gohberg, Kaashoek, Lerer and Rodman

Proof. We shall derive the following identities:

x 0"0,,

x 0"0,,

x x R R - N 012 U = IZ

1

R RX - NX ° UX = I 12 ZX 1

Since our hypotheses are symmetric with respect to the upper index x, it suffices to prove the first identity from each pair.

From the second identity in (i) we know that U^× R = −D^×_22 U. Thus

(3.6)  D_11 D^×_12 U − D_12 U^× R = (D_11 D^×_12 + D_12 D^×_22) U .

Now observe that D_11 D^×_12 + D_12 D^×_22 = 0, because D D^× = I. Thus (3.2a) holds. The identity (3.3a) also follows from D D^× = I. To prove (3.4a) equate the left upper entries on both sides of the equality P D^× = K^× P^×. Equating the right upper entries of this equality gives

N D^×_12 + G D^×_22 = (R^× − G U^×) G^× + K^×_12 H^× .

So

R^× R − N D^×_12 U =

= R^× R + G D^×_22 U − (R^× − G U^×) G^× U − K^×_12 H^× U =

= (R^× − G U^×)(R − G^× U) + K^×_12(−H^× U) + G D^×_22 U + G U^× R .

Now use D^×_22 U = −U^× R and K^× K = I. We obtain R^× R − N D^×_12 U = I_{Z_1}, and (3.5a) is proved.

From (3.2a) it is clear that formula (3.1) holds true.

Formula (3.3a) allows us to write

[ −N^×   I_{Z_1^×} + N^× D_12 U^× ; I_{B_1^×}   −D_12 U^× ] E = [ I_{Z_1^×}  0 ; 0  I_{B_1^×} ] .

This implies that E is invertible, and

E^{-1} = [ −N^×   I_{Z_1^×} + N^× D_12 U^× ; I_{B_1^×}   −D_12 U^× ] .

Using (3.5b) we see that E^{-1} has the desired form. By computing the matrix products F F^{-1} and F^{-1} F one easily sees (using the formulas (3.2)–(3.5)) that F is invertible with inverse of the desired form. □

Observe that equalities (3.2) - (3.5) amount to

(3.7)  [ D_11  −D_12 U^× ; −N  R^× ]^{-1} = [ D^×_11  D^×_12 U ; N^×  R ] .

In the terminology of [3] the operators D_11 and R are matricially coupled. Once (3.7) is established, the equality (3.1) together with the formulas for E^{-1} and F^{-1} follow also from a general result on matricially coupled operators (Theorem I.1.1 in [3]).

Further, we shall prove three lemmas. In the rest of this section we use the notations introduced in Section 2. The function W(λ) is assumed to admit the realizations (1.1).

LEMMA 3.2. For φ_- ∈ B^- and λ ∈ Γ we have

(3.8)  (1/2πi) ∫_{Γ_j} [(W(τ) − W(λ))/(τ − λ)] φ_-(τ) dτ = −(U_j N_j φ_-)(λ) ,   j = 0, …, s,

where


(3.9)  (U_j x)(λ) = C_j(λ − A_j)^{-1} x  (x ∈ Im P_j) ,   N_j φ_- = (1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} P_j B_j φ_-(τ) dτ .

Proof. Take λ ∈ Γ_k. First assume k ≠ j. Since λ is outside Γ_j, we know that (· − λ)^{-1} φ_-(·) has an analytic continuation in Ω_j^- (and a zero of order ≥ 2 at ∞ for j = 0). Hence

(3.10)  (1/2πi) ∫_{Γ_j} φ_-(τ)/(τ − λ) dτ = 0 .

Also C_j(· − A_j)^{-1}(I − P_j) B_j φ_-(·) has an analytic continuation in Ω_j^- (and a zero of order ≥ 2 at ∞ for j = 0). It follows that

(1/2πi) ∫_{Γ_j} [(W(τ) − W(λ))/(τ − λ)] φ_-(τ) dτ =

= (1/2πi) ∫_{Γ_j} [ C_j(τ − A_j)^{-1} P_j B_j / (τ − λ) ] φ_-(τ) dτ =

= C_j(λ − A_j)^{-1} P_j ( (1/2πi) ∫_{Γ_j} [ (λ − A_j)(τ − A_j)^{-1} P_j B_j / (τ − λ) ] φ_-(τ) dτ ) =

= C_j(λ − A_j)^{-1} P_j ( −(1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} P_j B_j φ_-(τ) dτ ) +

+ C_j(λ − A_j)^{-1} P_j B_j ( (1/2πi) ∫_{Γ_j} φ_-(τ)/(τ − λ) dτ ) .

Note that the last term is zero because of (3.10). So we have proved (3.8) for λ ∈ Γ_k, k ≠ j.

Next, take λ ∈ Γ_j. Using the resolvent equation we see that

(1/2πi) ∫_{Γ_j} [(W(τ) − W(λ))/(τ − λ)] φ_-(τ) dτ =

= C_j(λ − A_j)^{-1} ( −(1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} B_j φ_-(τ) dτ ) .


Since (· − A_j)^{-1}(I − P_j) B_j φ_-(·) has an analytic continuation in Ω_j^- (and a zero of order ≥ 2 at ∞ for j = 0), we may conclude that

−(1/2πi) ∫_{Γ_j} (τ − A_j)^{-1}(I − P_j) B_j φ_-(τ) dτ = 0 ,

and thus (3.8) holds. □
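The computations above rest on the resolvent equation (τ − A)^{-1} − (λ − A)^{-1} = (λ − τ)(τ − A)^{-1}(λ − A)^{-1}. As a quick sanity check, this identity can be verified numerically; the matrix and the two points below are arbitrary illustrative choices (assumed to lie off the spectrum), not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # arbitrary test matrix
I = np.eye(4)
tau, lam = 2.0 + 1.0j, -1.5 + 0.5j       # two points off sigma(A)

R_tau = np.linalg.inv(tau * I - A)       # resolvent at tau
R_lam = np.linalg.inv(lam * I - A)       # resolvent at lam

lhs = R_tau - R_lam
rhs = (lam - tau) * R_tau @ R_lam        # resolvent equation, right-hand side
assert np.allclose(lhs, rhs)
```

The same identity, with A replaced by A_j and restricted by the Riesz projection P_j, is what produces the two integral terms in the proof.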

LEMMA 3.3. Let ψ ∈ B have an analytic continuation outside Ω_j^- which is zero at ∞ when j ≠ 0. Then

(1/2πi) ∫_{Γ_ν} [(W(τ) − W(λ))/(τ − λ)] ψ(τ) dτ = −(U_ν N_ν ψ)(λ)   (ν ≠ j) ,  λ ∈ Γ ,

where U_ν is as in Lemma 3.2, and N_ν is given by the formula (3.9) with φ_- replaced by ψ ∈ B (so, N_ν is considered as an operator B → Im P_ν).

Proof. Take λ ∈ Γ_k with k ≠ ν. Then (· − λ)^{-1} ψ(·) has an

analytic continuation in Ω_ν^- (and a zero of order ≥ 2 at ∞ for ν = 0).

It follows that

(1/2πi) ∫_{Γ_ν} [(W(τ) − W(λ))/(τ − λ)] ψ(τ) dτ =

= (1/2πi) ∫_{Γ_ν} [ C_ν(τ − A_ν)^{-1} P_ν B_ν / (τ − λ) ] ψ(τ) dτ +

+ (1/2πi) ∫_{Γ_ν} [ C_ν(τ − A_ν)^{-1}(I − P_ν) B_ν / (τ − λ) ] ψ(τ) dτ .

Note that C_ν(· − A_ν)^{-1}(I − P_ν) B_ν ψ(·) has an analytic continuation in Ω_ν^- (and a zero of order ≥ 2 at ∞ for ν = 0). Since λ ∈ Γ_k with k ≠ ν, it follows that

(1/2πi) ∫_{Γ_ν} [ C_ν(τ − A_ν)^{-1} P_ν B_ν / (τ − λ) ] ψ(τ) dτ =

= C_ν(λ − A_ν)^{-1} P_ν ( −(1/2πi) ∫_{Γ_ν} (τ − A_ν)^{-1} P_ν B_ν ψ(τ) dτ +

+ (1/2πi) ∫_{Γ_ν} [ P_ν B_ν ψ(τ) / (τ − λ) ] dτ ) =

= −(U_ν N_ν ψ)(λ) .

Next, take λ ∈ Γ_ν. Since (· − A_ν)^{-1}(I − P_ν) B_ν ψ(·) has an analytic continuation inside Γ_ν (and a zero of order ≥ 2 at ∞ for ν = 0), we see that

(1/2πi) ∫_{Γ_ν} (τ − A_ν)^{-1}(I − P_ν) B_ν ψ(τ) dτ = 0 .

Thus

(1/2πi) ∫_{Γ_ν} [(W(τ) − W(λ))/(τ − λ)] ψ(τ) dτ =

= C_ν(λ − A_ν)^{-1} P_ν ( −(1/2πi) ∫_{Γ_ν} (τ − A_ν)^{-1} P_ν B_ν ψ(τ) dτ ) .

The lemma is proved. □

LEMMA 3.4. Let T be the Toeplitz operator with symbol W(λ), let R and S denote the right and left indicators of W, respectively, and let U and U^× be defined as in Theorem 2.1. Then

(3.11)  U S = −T U^× ,   U^× R = −T^× U ,

where T^× is the Toeplitz operator with symbol W(λ)^{-1}.


Proof. Letting N_ν and U_ν be as in Lemma 3.3, and defining

U_j^× : Z^× → B^+ ,   (U_j^× x)(λ) = −C_j(λ − A_j^×)^{-1} P_j^× x ,   x ∈ Z^× ,

we have

U S P_j^× x = Σ_{ν=0}^s U_ν S_{νj} P_j^× x =

= Σ_{ν≠j} U_ν N_ν U_j^× P_j^× x + U_j P_j P_j^× x .

Put ψ(λ) = (U_j^× P_j^× x)(λ) = −C_j(λ − A_j^×)^{-1} P_j^× x, and observe that ψ(λ) satisfies the hypothesis of Lemma 3.3. Hence

(U S P_j^× x)(λ) =

= −Σ_{ν≠j} (1/2πi) ∫_{Γ_ν} [(W(τ) − W(λ))/(τ − λ)] ψ(τ) dτ + (U_j P_j P_j^× x)(λ) =

= −(1/2πi) ∫_{Γ} [(W(τ) − W(λ))/(τ − λ)] ψ(τ) dτ + (U_j P_j P_j^× x)(λ) + a(λ) ,

where

a(λ) = (1/2πi) ∫_{Γ_j} [(W(τ) − W(λ))/(τ − λ)] ψ(τ) dτ .

It is not difficult to verify the formula

(1/2πi) ∫_{Γ} [(W(τ) − W(λ))/(τ − λ)] φ(τ) dτ = [ (P_Γ − I) M_W P_Γ + P_Γ M_W (I − P_Γ) ] φ(λ) ,   φ ∈ B .

As ψ(λ) ∈ B^+ = Im P_Γ, this formula gives

1 I W(T)-W(>") ~(T)dT = [(p - I)M P ] ~(A) , 00 T-A r wr r

and we have

• ~ E B .


To compute a(λ) we distinguish two cases. First assume λ ∈ Γ_j. Then

a(λ) = C_j(λ − A_j)^{-1} ( (1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} B_j C_j(τ − A_j^×)^{-1} P_j^× x dτ ) =

= C_j(λ − A_j)^{-1} ( (1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} dτ − (1/2πi) ∫_{Γ_j} (τ − A_j^×)^{-1} dτ ) P_j^× x =

= C_j(λ − A_j)^{-1} (I − P_j) P_j^× x .

So for λ ∈ Γ_j we have

(U_j P_j P_j^× x)(λ) + a(λ) = C_j(λ − A_j)^{-1} P_j^× x .

Next, take λ ∈ Γ_k, k ≠ j. Then

a(λ) = (1/2πi) ∫_{Γ_j} [ W(τ) ψ(τ) / (τ − λ) ] dτ + W(λ) ( −(1/2πi) ∫_{Γ_j} [ ψ(τ) / (τ − λ) ] dτ ) ,

and using the definition of ψ(λ) and the fact that ψ(λ) is analytic outside Ω_j^- having a zero at ∞ when j ≠ 0, we obtain

a(λ) = −( (1/2πi) ∫_{Γ_j} [ C_j(τ − A_j^×)^{-1} / (τ − λ) ] dτ ) P_j^× x =

= −C_j(λ − A_j^×)^{-1} P_j^× x .

Thus for λ ∈ Γ_k, k ≠ j, we have

(U_j P_j P_j^× x)(λ) + a(λ) =


= −W(λ) ψ(λ) .

We see that

U S P_j^× x = −P_Γ M_W P_Γ U_j^× P_j^× x ,

which proves the first identity in (3.11). The second identity is obtained by applying the first identity to W(λ)^{-1} in place of W(λ). □

The next result shows that the right indicator R and the restriction of the multiplication operator M_W to the subspace B^- are equivalent after appropriate extension by identity operators.

THEOREM 3.5. The operators R ⊕ I_{B^-} and M_22 ⊕ I_{Z^×} are equivalent. More precisely,

E [ R  0 ; 0  I_{B^-} ] = [ M_22  0 ; 0  I_{Z^×} ] F ,

where E and F are invertible operators given, together with their inverses, by the following formulas:

E = [ M_21 U^×   I_{B^-} + M_21 U^× N^× ; I_{Z^×}   N^× ] : Z^× ⊕ B^- → B^- ⊕ Z^× ,

F = [ M^×_21 U   M^×_22 ; R   N^× ] : Z ⊕ B^- → B^- ⊕ Z^× ,

E^{-1} = [ −N^×   I_{Z^×} + N^× M_21 U^× ; I_{B^-}   −M_21 U^× ] : B^- ⊕ Z^× → Z^× ⊕ B^- ,

F^{-1} = [ −N   S ; M_22   −M_21 U^× ] : B^- ⊕ Z^× → Z ⊕ B^- .


Proof. We shall apply Theorem 3.1. Put

Z_1 = ⊕_{j=0}^s Im P_j (= Z) ,   Z_2 = ⊕_{j=0}^s Ker P_j ,

Z_1^× = ⊕_{j=0}^s Im P_j^× (= Z^×) ,   Z_2^× = ⊕_{j=0}^s Ker P_j^× .

For j = 0, …, s define

p_j : B → X_j ,   p_j φ = (1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} B_j φ(τ) dτ ,

p_j^× : B → X_j^× ,   p_j^× φ = (1/2πi) ∫_{Γ_j} (τ − A_j^×)^{-1} B_j φ(τ) dτ .

Observe that for each j

(3.12)  (I − P_j) p_j φ_- = 0 ,   (I − P_j^×) p_j^× φ_- = 0   (φ_- ∈ B^-).

It follows that p and p^× have the following 2 × 2 operator matrix representations:

p = [ N  G ; H  L ] : B^- ⊕ B^+ → Z_1 ⊕ Z_2 ,   p^× = [ N^×  G^× ; H^×  L^× ] : B^- ⊕ B^+ → Z_1^× ⊕ Z_2^× .


Also, let U : Z_1 → B^+ and U^× : Z_1^× → B^+ be as in Theorem 2.2. Put D = M_W and D^× = M_{W^{-1}}, the operators of multiplication by W(λ) and W(λ)^{-1}, respectively. According to Lemma 3.2 the elements in the left lower corner of the 2 × 2 operator matrix representations for D and D^× (with respect to the direct sum decomposition B = B^- ⊕ B^+) are of the form required in Theorem 3.1 (to check this use the equality

P_Γ (W(λ) φ_-(λ)) = (1/2πi) ∫_{Γ} [(W(τ) − W(λ))/(τ − λ)] φ_-(τ) dτ ,  φ_- ∈ B^- ).

Write

K = [ K_ij ]_{i,j=1}^2 : Z_1 ⊕ Z_2 → Z_1^× ⊕ Z_2^× ,   Z_1 = ⊕_{j=0}^s Im P_j ,  Z_1^× = ⊕_{j=0}^s Im P_j^× .

It follows that for k ≠ j we have

(K_11)_kj = R_kj ,

where R_kj is the (k,j) entry in the right indicator R of W(λ), and U_j is defined as in Lemma 3.2.

Further,

(1/2πi) ∫_{Γ_j} (τ − A_j^×)^{-1} B_j C_j (τ − A_j)^{-1} dτ =

(3.13)  = (1/2πi) ∫_{Γ_j} (τ − A_j)^{-1} dτ − (1/2πi) ∫_{Γ_j} (τ − A_j^×)^{-1} dτ =

= P_j^× − P_j ,

and we have also


Hence K_11 = R − G^× U. Next we compute

(I − P_k^×) H^× U P_j = (I − P_k^×) p_k^× U_j P_j =

= (1/2πi) ∫_{Γ_k} (τ − A_k^×)^{-1} (I − P_k^×) B_k C_j (τ − A_j)^{-1} P_j dτ .

For k ≠ j the functions (· − A_k^×)^{-1}(I − P_k^×) B_k and C_j(· − A_j)^{-1} P_j are both analytic inside Γ_k (and have a zero at ∞ when k = 0). It follows that (I − P_k^×) H^× U P_j = 0 for k ≠ j. Next, take k = j. In view of (3.13) we have (I − P_j^×) H^× U P_j = −(I − P_j^×) P_j. So −H^× U = K_21. Replacing W(λ)

by W(λ)^{-1}, we obtain analogously that K^×_11 = S − G U^×, K^×_21 = −H U^×, where S is the left indicator of W(λ) (or, what is the same, the right indicator of W(λ)^{-1}).

We put K_22 = K^×_22 = I. Then, clearly, K and K^× have the 2 × 2 operator matrix representations required in Theorem 3.1 (with R^× = S).

Using the equalities

(λ − A_j)^{-1} B_j W_j(λ)^{-1} = (λ − A_j^×)^{-1} B_j   (j = 0, …, s)

and the analogous equality involving W_j(λ) in place of W_j(λ)^{-1}, we obtain

p_j M_{W^{-1}} = p_j^× ,   p_j^× M_W = p_j   (j = 0, 1, …, s).

Hence p^× M_W = K p, and the first identity in condition (i) of Theorem 3.1 is satisfied. The two other identities in (i) of Theorem 3.1 are covered by Lemma 3.4. The conditions (ii) and (iii) in Theorem 3.1 are also satisfied. Thus we can apply Theorem 3.1 to get the result of Theorem 3.5. □
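The whole construction is driven by Riesz (spectral) projections of the form P = (1/2πi) ∮_Γ (λ − A)^{-1} dλ. The sketch below approximates such a projection by trapezoid-rule quadrature on a circular contour; the test matrix, the contour, and the node count are illustrative assumptions, not data from the text.

```python
import numpy as np

def riesz_projection(A, center=0.0, radius=1.0, n=400):
    """Approximate (1/2*pi*i) * contour integral of (lam*I - A)^{-1} d(lam)
    over the circle |lam - center| = radius, by the trapezoid rule."""
    m = A.shape[0]
    I = np.eye(m)
    P = np.zeros((m, m), dtype=complex)
    for k in range(n):
        t = 2 * np.pi * k / n
        lam = center + radius * np.exp(1j * t)
        dlam = 1j * radius * np.exp(1j * t) * (2 * np.pi / n)  # d(lam) at this node
        P += np.linalg.inv(lam * I - A) * dlam
    return P / (2j * np.pi)

# Eigenvalues 0.3 and -0.5 lie inside the unit circle; 2.0 and 3.0 outside.
A = np.diag([0.3, -0.5, 2.0, 3.0])
P = riesz_projection(A)
assert np.allclose(P @ P, P, atol=1e-8)       # P is a projection
assert abs(np.trace(P) - 2) < 1e-8            # rank = number of enclosed eigenvalues
```

For analytic periodic integrands the trapezoid rule converges geometrically, so a few hundred nodes already give machine-precision projections here.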

Now we pass to the proofs of the results presented in Section 2. Proof of Theorem 2.1. Observe that

(3.14)  [ T  M_12 ; M_21  M_22 ]^{-1} = [ T^×  M^×_12 ; M^×_21  M^×_22 ] ,

and therefore T and M^×_22 are matricially coupled. By Theorem I.1.1 in [3] we have

(3.15)  E_1 [ M^×_22  0 ; 0  I_{B^+} ] = [ T  0 ; 0  I_{B^-} ] F_1 ,

where E_1 and F_1 are invertible operators given by the formulas

E_1 = [ M_12   I_{B^+} + M_12 M^×_21 ; I_{B^-}   M^×_21 ] ,   F_1 = [ M^×_12   T^× ; M^×_22   M^×_21 ] ,

with the inverses

E_1^{-1} = [ −M^×_21   I_{B^-} + M^×_21 M_12 ; I_{B^+}   −M_12 ] ,   F_1^{-1} = [ −M_21   M_22 ; T   −M_12 ] .

Combining the formula (3.15) with Theorem 3.5 (with W(λ) replaced by W(λ)^{-1}), we obtain Theorem 2.1. □

Proof of Theorem 2.2. First observe that Theorem 3.5 implies

(3.16)  [ M_22  −M_21 U^× ; −N  S ]^{-1} = [ M^×_22  M^×_21 U ; N^×  R ] .

Hence Theorem 2.1 in [3] gives

(3.17)  Ker M^×_22 = M_21 U^× Ker S ,   Ker S = N^× Ker M^×_22 ;

(3.18)  Im M^×_22 = N^{-1} Im S ,   Im S = (M^×_21 U)^{-1} Im M^×_22 ;

(3.19)  (M^×_22)^I = M_22 − M_21 U^× S^I N ;

(3.20)  S^I = R − N^× (M^×_22)^I M^×_21 U .

Analogously, the relationship (3.14) gives rise to the equalities

(3.21)  Ker T = M^×_12 Ker M^×_22 ,   Ker M^×_22 = M_21 Ker T ;

(3.22)  Im M^×_22 = M_12^{-1} Im T ,   Im T = (M^×_21)^{-1} Im M^×_22 ;

(3.23)  T^I = T^× − M^×_12 (M^×_22)^I M^×_21 ;

(3.24)  (M^×_22)^I = M_22 − M_21 T^I M_12 .

Combining (3.17)–(3.18) with (3.21)–(3.22), one proves that

Ker T = M^×_12 M_21 U^× Ker S ,   Ker S = N^× M_21 Ker T ,

Im T = (N M^×_21)^{-1} Im S ,   Im S = (M_12 M^×_21 U)^{-1} Im T .

Now

M^×_12 M_21 U^× = (I − T^× T) U^× = U^× − T^× T U^× = U^× + T^× U S ,

because of formula (3.11). So for x ∈ Ker S we have M^×_12 M_21 U^× x = U^× x. This yields the desired formula for Ker T.

Next, use that Im T = { φ ∈ B^+ | N M^×_21 φ ∈ Im S }. We have

(M^×_21 φ)(τ) = (1/2πi) ∫_{Γ} [(W(λ)^{-1} − W(τ)^{-1})/(λ − τ)] φ(λ) dλ ,   φ ∈ B^+

(see the proof of Lemma 3.4). We compute

N_j M^×_21 φ = −(1/2πi)^2 ∫_{Γ_j} (τ − A_j)^{-1} P_j B_j ( ∫_{Γ} [(W(τ)^{-1} − W(λ)^{-1})/(τ − λ)] φ(λ) dλ ) dτ .

Put

α_ν = −(1/2πi)^2 ∫_{Γ_j} (τ − A_j)^{-1} P_j B_j ( ∫_{Γ_ν} [(W(τ)^{-1} − W(λ)^{-1})/(τ − λ)] φ(λ) dλ ) dτ .

For τ, λ ∈ Γ_j we have

(W(τ)^{-1} − W(λ)^{-1})/(τ − λ) = C_j(λ − A_j^×)^{-1}(τ − A_j^×)^{-1} B_j .

So

α_j = −(1/2πi)^2 ∫_{Γ_j} ∫_{Γ_j} (τ − A_j)^{-1} P_j B_j C_j(τ − A_j^×)^{-1} (λ − A_j^×)^{-1} B_j φ(λ) dλ dτ .

Using the identity

(τ − A_j)^{-1} B_j C_j (τ − A_j^×)^{-1} = (τ − A_j)^{-1} − (τ − A_j^×)^{-1} ,

we find that

α_j = (1/2πi) ∫_{Γ_j} P_j(λ − A_j^×)^{-1} B_j φ(λ) dλ −

− (1/2πi) ∫_{Γ_j} P_j P_j^×(λ − A_j^×)^{-1} B_j φ(λ) dλ =

= (1/2πi) ∫_{Γ_j} P_j(λ − A_j)^{-1} B_j W(λ)^{-1} φ(λ) dλ −

− (1/2πi) ∫_{Γ_j} P_j P_j^×(λ − A_j^×)^{-1} B_j φ(λ) dλ .

For ν ≠ j one easily computes

α_ν = −(1/2πi) ∫_{Γ_ν} P_j P_j^×(λ − A_j^×)^{-1} B_j φ(λ) dλ .

It follows that

N_j M^×_21 φ = −(1/2πi) ∫_{Γ} P_j P_j^×(λ − A_j^×)^{-1} B_j φ(λ) dλ .

Since N M^×_21 φ = col(N_j M^×_21 φ)_{j=0}^s, we get the desired description of Im T. It remains to establish the formula for the generalized inverse T^I of T. Substituting (3.19) in (3.23) and using the identities

N^× M_22 = R N ,   N M^×_22 = S N^× ,

which follow from (3.16), we see that

T^I = T^× + U^× R N M^×_21 + U^×(I − R S) S^I N M^×_21 ,

and hence

T^I = T^× + U^× R Λ + U^×(I − R S) S^I Λ ,

where Λ = N M^×_21 . □

In conclusion of this section we remark that using (3.16)–(3.24) one can also express Ker S, Im S and S^I in terms of Ker T, Im T and T^I.

4. The barrier problem

Let B be an admissible Banach space of Y-valued functions, with the direct sum decomposition B = B^+ ⊕ B^-, as in Section 2. For a continuous L(Y)-valued function W(λ) defined on Γ, consider the Riemann barrier problem:

(4.1)  W(λ) φ_-(λ) = φ_+(λ) ,  λ ∈ Γ ;   φ_- ∈ B^- ,  φ_+ ∈ B^+ .


Using Theorem 3.5, we shall find a general solution to the problem (4.1) in case W(A) is given by its realizations (1.1).

THEOREM 4.1. Let W(λ) be given by

(4.2)  W(λ) = I + C_j(λ − A_j)^{-1} B_j ,   λ ∈ Γ_j ,  j = 0, …, s,

where the operators A_j are such that σ(A_j) ∩ Γ_j = σ(A_j − B_jC_j) ∩ Γ_j = ∅ (j = 0, …, s). Let R be the right indicator of W(λ) with respect to the realization (4.2) and contour Γ. Then the general solution of the Riemann barrier problem (4.1) is given by

(4.3)  φ_-(λ) = C_j(λ − A_j^×)^{-1} x_j + W(λ)^{-1} ( Σ_{k≠j} C_k(λ − A_k)^{-1} x_k ) ,  λ ∈ Γ_j  (j = 0, 1, …, s),

φ_+(λ) = Σ_{k=0}^s C_k(λ − A_k)^{-1} x_k ,   λ ∈ Γ ,

where col(x_j)_{j=0}^s is an arbitrary element in Ker R. Moreover, given a solution {φ_-, φ_+} of (4.1), then x_0, …, x_s in (4.3) are uniquely determined.

Proof. We shall use the notations introduced in Section 2. Observe that (φ_-, φ_+) is a solution of (4.1) if and only if φ_- ∈ Ker M_22. By Theorem 3.5,

Ker M_22 = { F [ x ; 0 ] | x ∈ Ker R } ,

where F is as in Theorem 3.5. Take x ∈ Ker R. Then Lemma 3.4 implies that

F [ x ; 0 ] = M^×_21 U x = M_{W^{-1}} U x − T^× U x = M_{W^{-1}} U x + U^× R x = M_{W^{-1}} U x .

It follows that the general solution of (4.1) is given by

φ_- = M_{W^{-1}} U x ,   φ_+ = U x ,

where x is an arbitrary element of Ker R. Since F is invertible, it is clear that x is uniquely determined by {φ_-, φ_+}. Further, one checks readily that φ_+ and φ_- have the desired form. □

We remark that Theorem 4.1 allows us also to solve the following barrier problem:

(4.4)  W(λ) φ_+(λ) = φ_-(λ) ,  λ ∈ Γ ;   φ_+ ∈ B^+ ,  φ_- ∈ B^- .

Indeed, rewriting the barrier identity in (4.4) as W(λ)^{-1} φ_-(λ) = φ_+(λ), we obtain a barrier problem of type (4.1), and Theorem 4.1 is applicable. Thus, the general solution of (4.4) is given by the formulas

φ_-(λ) = C_j(λ − A_j)^{-1} x_j + W(λ) ( Σ_{k≠j} C_k(λ − A_k^×)^{-1} x_k ) ,  λ ∈ Γ_j  (j = 0, 1, …, s),

φ_+(λ) = Σ_{k=0}^s C_k(λ − A_k^×)^{-1} x_k ,   λ ∈ Γ ,

where col(x_j)_{j=0}^s is an arbitrary element in Ker S.

5. Canonical factorization

Let W(λ) be a continuous L(Y)-valued function defined on Γ and assume that W(λ) is invertible for each λ ∈ Γ. A factorization

(5.1)  W(λ) = W_-(λ) W_+(λ) ,   λ ∈ Γ ,

where W_±(λ) are continuous L(Y)-valued functions on Γ, is called left canonical (with respect to Γ) if W_-(λ) and W_+(λ) admit analytic continuations in Ω^- and Ω^+, respectively, such that all operators W_-(λ), λ ∈ Ω^- ∪ Γ, and W_+(λ), λ ∈ Ω^+ ∪ Γ, are invertible. Note that since Ω^- consists of (s+1) disjoint connected components Ω_0^-, Ω_1^-, …, Ω_s^-, we can regard W_- as a collection of (s+1) analytic functions W_-^{(0)}, W_-^{(1)}, …, W_-^{(s)} defined in Ω_0^-, Ω_1^-, …, Ω_s^-, respectively, and such


that the boundary values of W_-^{(i)} on Γ_i coincide with the restriction W_-|Γ_i. On the other hand, as Ω^+ is connected, the restrictions W_+|Γ_i (i = 0, 1, …, s) are the boundary values of the same analytic function W_+ in Ω^+.

It is easy to see that a left canonical factorization, if it exists, is unique up to multiplication of W_+(λ) by a constant (i.e. independent of λ) invertible operator E on the left and simultaneous multiplication of W_-(λ) by E^{-1} on the right.

Interchanging in (5.1) the places of W_- and W_+, we obtain an analogous definition of a right canonical factorization W = W_+ W_- of W.

The next theorem gives necessary and sufficient conditions for the existence of canonical factorization as well as explicit formulas for the factors W_± in case W(λ) admits realizations

(5.2)  W(λ) = W_j(λ) = I + C_j(λ − A_j)^{-1} B_j ,   λ ∈ Γ_j ,  j = 0, 1, …, s.

THEOREM 5.1. Let an L(Y)-valued 6unction W().) be de6~ed by 60Jtmu1a..6 (5.2) and tet a(Aj ) n rj = a(Aj) n rj = 0. Then W().) adrnili

~ght (Jt.Mp. teQ.tJ canonica.e. 6adoJch.a.U.on vJ-i.:th Jt.Mpect.to r .i6 and only .i6 :the ~ght (Jt.Mp. teQ.tJ .ind-ica:t.0Jt. R (Jt.Mp. SJ 06 w().) wU.h Jt.Mpect .to :the. Jt.eat.izaUol% (5.2) .i.b ~veJt:t..ibte.

16 S .i.b .inveJt:t..ibte, :then :the 6adofU W± .in (5.1) Me g.iven by :the 60Jtmuta.6

(5.3) W+().) = I + exS-l()._M)-l B. + ). E Q

(5.4) W_(')=I+[C;CX1{'-[:; B;Cl' [B; 1 MX J _S-l B

' )'EQj i=O, ... ,s,

wheJt.e M. MX. eX, B Me de6.ined by (1.3)-(1.6). AnatogoU1>ty,.i6 R .i.b .inveJt:t..ibte, :then :the 6actofU 06 :the ~ght

MnoMcat 6acto~zaUon w().) = W+().)W_ ().) Me glven by:the. 60Jtmut'.a.6

(5.5) W+().) = I - C()._M)-l R- l BX ). E Q+

Page 114: Constructive Methods of Wiener-Hopf Factorization

104

(5.6) WJl} I + [C. ,

Gohberg, Kaashoek, Lerer and Rodman

CR-1+[::C; :JJ -1 [:: 1 ,E"i' i-O, ... ,s,

wh~e C, BX ~e de6~ned by (1.8}-(1.9). Proof. By the results of Gohberg-Leiterer [8] the Toep1itz

operator T with the symbol W(l} is invertible if and only if W(l} admits left canonical factorization. By Theorem 2.1 the invertibility of T is equivalent to the invertibility of the left indicator of W(l}, and the part of the Theorem concerning existence of left canonical factor­ization follows. To prove the existence part for the right canonical factorization, observe that W(l) admits right canonical factorization if and only if W(l}-l admits a left one, and that the right indicator for W(l} coincides with the left indicator for W(l)-l.

flow we pass to the verification of formulas (5.3) - (5.6). Let W+ and W_ be given by (5.3) and (5.4), respectively.

Clearly, W+ is analytic in n+. Using (1.2), one easily verifies that

(5.7) W+(l)-l = I - CX(l_MX)-l S-l B ,

which is also analytic in n+. Multiplying

W(l) = I + C.(l-A.)-l B. ,l E r,., 1 1 ,

and W+(l)-l given by (5.7), it is easily seen that the product coincides with the formula (5.4). It remains to show that the function (5.4) and its inverse admit analytic continuations into nj.

The function (5.4) can be written as W (l) = I + C.(A-A.)-l(I - P.)B. + D(A)S-l B • 1 E r,. - " , ,

where -1 (-1 x( x)-l D(l) = [0 ... 0 Ci(l-Ai ) O ... O]S-Ci A-Ai} BiC l-M +

is a function whose values are operators ZX ~ Y. In order to prove that W_(l) admits analytic continuation in ni it suffices to show that

Page 115: Constructive Methods of Wiener-Hopf Factorization

Contourwise rational symbols 105

D(A) has this property. Substituting the expressions for S, CX and

MX we obtain for each x € Imp~ (k r i):

D(A)X = Ci(A-Ai)-l[- 2:i ~. (~-Ai)-lBiCk(~-A~)-l Xd~] + , - [C.(A-A.)-l B. + I]Ck(A-Axk)-l x = , , ,

_ [1 (A-Ai)-l_(~-Ai)-l x -1 - Ci - 211i J II - A Bi Ck(ll-Ak)

r i

- [Ci(A-Ai)-l Bi + I]Ck (A_A~)-l x.

Xd ll ] +

Now (ll-A~)-lx is analytic in ni (because k I i), so

1 J 1 (X)-l ( x-l - 211i ll-A BiC k ll-Ak Xdll = BiCk A-Ak) x r i

and the expression for D(A)X takes the form

( x)-l [1 f J_I )-1 (x)-l ] Ck A-Ak x + Ci 211i r. ~ll-Ai BiCk ll-Ak Xdll . , Both summands obviously admit analytic continuations into ni . If

x x E ImP i ' then

( -1 ( )-1 ( X)-l ( x-l D(A)x = C,.[ A-A,.) Pl' - A-A. B.C. A-A. - A-A.) ] x = , ", , = c.[(A-A.)-l P. - (A_A.)-l] x , , , ,

which is again analytic in ni. The function whose values are the inverses of the values of (5.4)

can be written as follows:

(5.8) ( -1 W A) = I-[C. - ,

Using the equality (which follows from (1.2»

A E r .. ,

Page 116: Constructive Methods of Wiener-Hopf Factorization

106 Gohberg, Kaashoek, Lerer and Rodman

one can rewrite the expression for W_(A)-l in the form

o

o D(A)x = S P~(A-A~)-lB. - (A-Mf1 B[C'(A-A~f1 B. - I]

1 1 1 1 1 1

I 0

l 0

Further. one shows (as it was done in the preceding paragraph for D(A)) that PkD(A)x admits analytic continuation into 0i. for k = O.l •...• s. Hence W (A)-l is analytic in 0: as well. So. formulas (5.3) and

- 1 (5.4) are verified.

To verify (5.5) and (5.6), use the already proved formulas (5.3) and (5.4) for the left canonical factorization W(A)-l = W_(A)-l W+(A)-l of the inverse function

W(A)-l = I + C. (A-A~)-l (-B.) • A E r1• 1 1 1

Bearing in mind that the right indicator of W(A) coincides with the left indicator of W(A)-l. we obtain the inverses of the factors in the right canonical factorization W = W+W_:

(5.9) W+(A)-l = I + CR- l (A_Mx)-l BX A E 0+

(5.10) W_(A)-l = I + [C i C] {A_[A~ -BiC]J-1[-Bi ], A E oi

o M _R-1BX

Taking inverses of these realizations and using equality (1.7). we obtain

Page 117: Constructive Methods of Wiener-Hopf Factorization

Contourwise rational symbols 107

the desired formulas (5.5) and (5.6). c

As an illustration of Theorem 5.1 consider the case of a scalar

function W(,,) (Le. Y = ~). Let Pj(resp. \lj) denote the number of poles (resp. zeros) of Wj (,,) in g;(j=O,l, ... ,s). If WI,,) admits canonical factorization, then, in view of Theorem 5.1, the indicator S is invertible and, in particular, dimZ = dimZx. This amounts to the equality

(5.11) s I p.

j=l J

s I \I.

j=l J

Conversely, using the fact that the indicator S satisfies the Lyapunov equation (1.2) and applying Corollary 1.3 from [9J, one easily sees that condition (5.11) implies the invertibi1ity of the indicator Sand, consequently, the existence of the canonical factorization of WI,,).

In conclusion of this section note that the smoothness assumptions imposed on rj in Section 1 are superfluous. This follows from the fact that for the purpose of canonical factorization the given contours r. can

J be replaced by smooth contours ~j which are sufficiently close to rj and meet all other hypothesis of Section 1.

6. Unbounded domains. The results of Sections 2-5 can be extended to the case of

unbounded domains g+. In this section we present results on canonical factorization for two types of unbounded domains g+'

Firstly, consider the case that g- = [~U{oo} ]~+ consists of a finite number of bounded connected components gj (j=l, ... ,s), whose boundaries rj are simple closed rectifiable non-intersecting contours

+ positively oriented with respect to g. The L(Y)-valued function W(,,) s

is again defined on r = u r· by realizations (1.1), and it is assumed j=l J

that a(A.)nr. = a(A~)nr. = 0. We define the spaces Z, ZX, the indicators J J J J x x x

R, S and the operators M, M , C, C , B, B precisely as in Section 1 with the only remark that the indices i, j, k which are involved in these definitions take values 1,2, ... ,s (but not the value 0). Then the following analogue of Theorem 5.1 holds.

THEOREt~ 6.1. UrtdeA the above M.6wnpWvu, artd rtotIrt-i.om the 6uI'lctiOrt WI,,) ~ JrJ.ght (JtMp. tefit) ca.rtOMcal. 6ac;toJUzo.:tlort ;"6 artd

Page 118: Constructive Methods of Wiener-Hopf Factorization

108 Gohberg, Kaashoek, Lerer and Rodman

on.ly -<-6 :the JUght {Jt.eAp. te6:t.J ,wcUc.a.toJt. R {Jt.eAp. S J -iA ,wveJLt).bi.e. 16 S -iA ,wveJLt).bi.e,:then :the. 6a.ctoM 06 :the. te.6:t. c.a.non..i.c.a..e. 6a.ctoJUza.:tWn and:t.heiJt. -<-nveMeA ate g-<-ve.n by 6o~uta.h (5.3), (5.4), (5.7), (5.8). Si.mU.a.Jl£y, -<-6 R -iA ,wveJLt).bi.e., then :the 6a.ctoM 06 :the JUght c.a.non..i.ca1'. 6a.ctoJUza.tion and :theiJt. ,wVeMeA ate g-<-ven by (5.5), (5.6), (5.9), (5.10).

Proof. Choose an auxiliary simple closed rectifiable contour ro bounding a bounded domain ~ such that r. c ~ (j=l,2, ... ,s). Denote

s S J ~ =~~.~ n jJ , ? = .~ rj (ra is positively oriented with respect to n+).

J-1 J-O ~ ~ ~ Introduce a new function W(A) on r by setting W(A) = W(A), A E r

~

and W(A) = I, AErO' We claim that W(A) admits right (resp. left) canonical factorization with respect to r if and only if W(A) admits

~

right (resp. left) canonical factorization with respect to r. Indeed, ~ ~ ~

assume that W(A) = 14 (A)W+(A), A E r is a left canonical factorization of ~ - ~ ~

W(A) and set WJA) = WJA) , W+(A) = W+(A) for A E r. Obviously, W(A) = W_(A)W+(A), A E r and the function W_(A),A E r meets the require­ments for the left factor in factorization (5.1). The factor W+(A), A E r admits an analytic and invertible extension in ~ and since ~ ~ 1 W+(A) = [WO(A)]- on rO the function W+(A) admits an analytic and invertible extension in the whole rt. Conversely, if W(A)=WJA)W+(A),

~

A E r is a left factorization of W(A), we define W+(A) = W+(A) for ~ ~ ~ 1

A E r , WJA) = WJA) for A E rand WJA) = [W+(A)r for AErO' ~ ~ . 'V 'V ~

Clearly, W(A) = W_(A)W+(A) , A E r and the functions W_(A) , W+(A) meet all the requirements for the factors in the canonical factorization of 'V W(A) .

Using Theorem 5.1 we conclude that W(A) admits left canonical factorization with respect to r if and only if the left indicator 'V 'Vx 'V 'V 'V S : Z + Z of W(A) with respect to r is invertible, where ~ s x s x Z = fl ImP. , Z = ~ ImP.. film'! observe that 1m Po = 1m Po = {O} i.e.

j=O J j=O J

Z'Vx = Zx = ~ I pX 'V S P . f h s'V w m ., Z = Z = I.!j 1m .. So, 1 n act, t e operator j=l J j=l J

~oincides with the left indicator S: ZX + Z of ~(A) with respect to r as defi ned at t:le very begi nni ng of thi s Section. Further, the fi rst paragraph of this proof shm'/s that the factorization factors of ~J(A) are 'V ~btained "s restrictio!is of W(A) to the contour r. Applying again

Page 119: Constructive Methods of Wiener-Hopf Factorization

Contourwise rational symbols 109

"v "vx the observation that Z = Z ,Z = Z, we see that formulas (5.3), (5.4), (5.7), (5.8) hold with M, MX, eX, B as defined at the beginning of

this Section. The results for the right factorization are obtained similarly. 0

Secondly, we are interested in the case that D+ is a strip bounded by the lines r l = {A E tl 1m A = h} and r2 = {A E ~ I 1m A = -h} where h is some fixed positive number. Note that in this case the domains

Dl = {A E t I 1m A > h} and D2 = {A E t I 1m A <-h} are unbounded. Again we assume that the L(Y)-valued function W(A) is defined on r = r, ,) r2 by

A E r l (6.1)

and

Denote ZX = 1m P~ ~ 1m P~

where Pj (resp. Pj) stands for the spectral projection of Aj(resp. Aj) corresponding to the part of a{A.) (resp. of a(A~)) in D:(J = 1,2).

J X J ~ Introduce the operator matrices S: Z + Z and R: Z + Z as follows:

(6.3) S~r _ '1 11m ,;

L- 2!i_L{A-ih-A2)-lB2el{A-ih-A~)-lp~dA

(6.4)

R~ _ '1 11m '1 "i __ A+ih-Al 'l C, ,+ih-A, ',d' x - 1 "'I ( x)-l ( )-1 j 2;i_~{A-ih-A~rlB2Cl{A-ih-Alrlpl dA p~IIm P2

The operator S(resp. R) will be referred to as the lef,t (resp. Jr.1ght)

i~cato~ of W{A) with respect to r = rl U r2. We shall see that the existence of the left (resp. right) canonical factorization of W{A) is

Page 120: Constructive Methods of Wiener-Hopf Factorization

no Gohberg, Kaashoek, Lerer and Rodman

e'luivalent to the invertibility of the operator (6.~) (resp. (6.4)). Let us also define explicitly all other operators which will be involved in the formulas for the factorization factors:

Z+Z , r{= A~ I 1m P~

o

THEOREM 6.2. Let the L(Y)-vatued 6unction W(A) be de6~ed by (6.1) on the boun~y r = rlu r2 06 the ~tnip 0+ and~~ume that (6.2) hot~. Then W(A) admJ..:t6 teat tJt~p. JrJ..ghtJ canon.£.c.at 6actoJrJ..za-;ti.(m W-i;th Jt~pect to r '<'6 and only '<'6 the teM (Jt~p. JrJ..ghtJ ~dA..catoJt

s (Jt~p. RJ de6"Lned by (6.3) (Jt~p. (6.4)) ~ .<.nveJttibte. 16 S (JtMp.R J ~ '<'nveJttib1!.e, then the 6actoM W+ 06 the

teM (JtMp. JrJ..ghtJ canon.£.c.a.e. 6actoJrJ..ztLti.on aoo theA..Jt .<.nVeMM W: 1 Me g.<.ven by 60J1nlUl~ (5.3), (5.4), (5.7), (5.8) (Jt~p. (5.5), (5.6), (5.9),

(5.10)), wheJte the Op~OM M, MX, C, eX and B, BX Me de6.<.ned by (6.5) - (6.7).

Proof. Fi X some , > 0 and introduce the aux il i ary curves

rl ={AErlIIReAI :::dU{AEt IA-ihl =, 1mA ~ h} ,

1mA :::-h}

The orientation on r. is chosen to be consistent with the orientation J A

on r. (j=1,2) , i.e. r. is negatively oriented with respect to the J A J A

bounded domain oj whose boundary is rj (j=l,2):

Page 121: Constructive Methods of Wiener-Hopf Factorization

Contourwise rational symbols 111

~---- . ( ~

fl n+

~v ~

fz

rz

Denote n+ = [t u roo}] , [~l u ~2] and choose a sufficiently large number T such that

(6.8) (j=1,2) .

Introduce the function W(A) on r = rl U r2 by setting

\Wl(A)=I+Cl(A-AlrlBl ' AErl

W(A) ='\ ! A -1 ~W2(A) = I + C2(A-A2) B2 ' A E r2

We claim that W(A) admits left canonical factorization with respect to r if and only if W(A) admits such factorization with respect to r. Indeed, let W(A) = W_(A)W+(A) (AE r) be a left canonical factorization of W with respect to r (such that W+{oo) = W_(oo) = I). Condition (6.8) implies that the functions W'(A) and [~JJ'(A)rl are analytic in the

- J domain ~~, ~ ~ and continuous in its closure. Hence the same property

J J [ -( )]-1 () r (. -1 -have the functions WJ. A W. A and LW, A)] W.(A) (j=1,2). Since _ -1 J J J

W+(>-) = [W j (>-)] W.(>-) for A E r. (j=l ,2), we conclude that W+(>-) admits an analytic ~nd invertible ~xtension in ;+ which is continuous in Q+ (and takes value I at infinity). Setting W±(A~ = W±(A),A E r" we o~viously obtain the,left canonical factor~zation A W(A~=W_(A)W+(A)~AEr)

of W \'Jith respect to r. Conversely, let W(A) = WJA)W+(A) , A E r be a left can~nical factorization of W relative to r (and W+(oo) = I). Set W+(A) = W+(A) for A E r. Clearly, W+(A) meets the requirements for the right factorization factor in (5.1). Further,define W_(A) = W~(A) = Wj(A)[W+(A)]-l , AErj (j=1,2).

Page 122: Constructive Methods of Wiener-Hopf Factorization

112 Gohberg, Kaashoek, Lerer and Rodman

Then W(A) = W_(A)W+(A) ,A€r and using again (6.8),one easily checks that W_ is analytic and invertible in the domain g- and continuous in its closure.

Using Theorem 6.1 we infer that W(A) admits left canonical factorization with respect to r if and only if the left indicator S of

A

W relative to r is invertible, where

S=

1 f ( )-1 ( x)-l x - 2niA A-A2 B2C1 A-A1 P1dA r2

Note that the spe:tra1 projections Pj (resp. Pj) (j=l,2) remain unchanged for any contour r satisfying (6.S) and coincide with the spectral projections of the operators Aj (resp. Aj) corresponding to a(Aj ) n njA (resp. a(Aj) n nj) (j=1.2). A1so'Aone easily sees that the operator S is one and the same for any contour r satisfying (6.8) and, in fact, S = S as defined by (6.3). Now the formulas for the factorization factors and their inverses follow from Theorem 6.1. The assertions about right factorization are obtained similarly. c

7. The pair equation. Let Y be a Banach space. Consider the equation of the following type:

(7.1)  φ(t) − ∫_{−∞}^{∞} k₁(t − s)φ(s)ds = g(t) ,  t < 0 ,

       φ(t) − ∫_{−∞}^{∞} k₂(t − s)φ(s)ds = g(t) ,  t > 0 .

Here g(t) is a given Y-valued function defined on the real line such that

∫_{−∞}^{∞} e^{h|t|} ||g(t)|| dt < ∞ ,

where h ≥ 0 is a fixed number; in short, g(t) ∈ e^{−h|t|}L₁(ℝ;Y). The unknown function φ(t) is also assumed to belong to the Banach space e^{−h|t|}L₁(ℝ;Y). We assume also that the kernels k₁(t) and k₂(t) have


the properties that e^{−ht}k₁(t) and e^{ht}k₂(t) are L(Y)-valued L₁-functions on the real line, and their Fourier transforms are analytic in a neighborhood of the real line and infinity. This condition was expressed in terms of the functions k_j(t) themselves in [4]. It follows (see Theorem 1.2 in [2] for the case h = 0) that k_j(t) (j = 1,2) admit exponential representations

k_j(t) = iC_j e^{−itA_j}(I − P_j)B_j ,  t > 0 ,

k_j(t) = −iC_j e^{−itA_j}P_j B_j ,  t < 0 ,

where B_j : Y → X_j , A_j : X_j → X_j , C_j : X_j → Y are (bounded linear) operators (with X₁, X₂ denoting some Banach spaces) such that

(7.2)  σ(A_j) ∩ Γ_j = ∅  (j = 1,2)

and P_j is the Riesz projection of A_j corresponding to the part of the spectrum of A_j in the halfplane Ω_j (j = 1,2). Here Γ_j , Ω_j (j = 1,2) are defined as in the part of Section 6 which deals with canonical factorization with respect to the strip Ω⁺. In the rest of this section we use also all other notations introduced in that part of Section 6.

Note that the conditions imposed on the functions k_j(t) (j = 1,2) imply that the operator E defined by the left hand side of (7.1):

(7.3)  (Eφ)(t) = φ(t) − ∫_{−∞}^{∞} k₁(t − s)φ(s)ds ,  t < 0 ,

       (Eφ)(t) = φ(t) − ∫_{−∞}^{∞} k₂(t − s)φ(s)ds ,  t > 0 ,

is a bounded linear operator acting from e^{−h|t|}L₁(ℝ;Y) into e^{−h|t|}L₁(ℝ;Y).


Now we start solving equation (7.1). Introduce two functions ψ₁(t) and ψ₂(t) as follows:

ψ₁(t) = 0 ,  t < 0 ;   ψ₁(t) = φ(t) − ∫_{−∞}^{∞} k₁(t − s)φ(s)ds ,  t > 0 ,

ψ₂(t) = φ(t) − ∫_{−∞}^{∞} k₂(t − s)φ(s)ds ,  t < 0 ;   ψ₂(t) = 0 ,  t > 0 .

Denote also

(7.4)  g₁(t) = g(t) ,  t < 0 ;   g₁(t) = 0 ,  t > 0 ,

(7.5)  g₂(t) = 0 ,  t < 0 ;   g₂(t) = g(t) ,  t > 0 .

With these notations (7.1) can be rewritten in the form

(7.6)  φ(t) − ∫_{−∞}^{∞} k₁(t − s)φ(s)ds = g₁(t) + ψ₁(t) ,  −∞ < t < ∞ ,

       φ(t) − ∫_{−∞}^{∞} k₂(t − s)φ(s)ds = g₂(t) + ψ₂(t) ,  −∞ < t < ∞ .

Multiply the first equation by e^{−ht} and apply the Fourier transform

x(t) ↦ ∫_{−∞}^{∞} e^{iλt} x(t)dt

to both sides:

(7.7)  Φ(λ) − K₁(λ)Φ(λ) = G₁(λ) + Ψ₁(λ) ,

where the capital letters denote the Fourier transform of the function denoted by the corresponding lower case letter. Note that Φ(λ) is


analytic in the strip Ω⁺, Ψ₁(λ) is analytic in the halfplane Ω₁ and G₁(λ) is analytic in Ω⁺ ∪ Ω₂.

Similarly, multiplying the second equation in (7.6) by e^{ht} and taking the Fourier transform we obtain

(7.8)  Φ(λ) − K₂(λ)Φ(λ) = G₂(λ) + Ψ₂(λ) .

Here Ψ₂(λ) is analytic in Ω₂ and G₂(λ) is analytic in Ω⁺ ∪ Ω₁. Introduce the contourwise operator function W(λ) defined on Γ = {λ | Im λ = ±h} as

W(λ) = I − K₁(λ) ,  λ ∈ Γ₁ ,

W(λ) = I − K₂(λ) ,  λ ∈ Γ₂ .

Then equations (7.7) and (7.8) can be interpreted as the barrier problem

(7.9)  W(λ)Φ⁺(λ) = G(λ) + Ψ(λ) ,  λ ∈ Γ ,

where Φ⁺(λ) = Φ(λ) (λ ∈ Ω⁺),

Ψ(λ) = Ψ₁(λ) ,  λ ∈ Γ₁ ;   Ψ(λ) = Ψ₂(λ) ,  λ ∈ Γ₂ ,

and

(7.10)  G(λ) = G₁(λ) ,  λ ∈ Γ₁ ;   G(λ) = G₂(λ) ,  λ ∈ Γ₂ .

Assume in addition that σ(A_j − B_jC_j) ∩ Γ_j = ∅ (j = 1,2). Then the barrier problem (7.9) has a unique solution Φ⁺(λ) for every g(t) ∈ e^{−h|t|}L₁(ℝ;Y), with G(λ) obtained from g(t) by (7.4), (7.5) and


(7.10), if and only if the operator function W(λ) admits a left canonical factorization with respect to Γ = Γ₁ ∪ Γ₂ (see [8]). This observation allows us to use Theorem 6.2 in order to express the criterion for unique solvability of (7.9), and hence of (7.1), and to produce formulas for the solution in terms of the left indicator S of W(λ) with respect to Γ defined by (6.3).

Using the notations (6.5) - (6.7) we define the following functions:

w₁(t) = i[C₁ , CˣS⁻¹](I − P̃₁) exp(−it [ A₁ˣ 0 ; BC₁ M ]) [ B₁ ; −B ] ,  t > 0 ;   w₁(t) = 0 ,  t < 0 ,

w₂(t) = i[C₂ , CˣS⁻¹](I − P̃₂) exp(−it [ A₂ˣ 0 ; BC₂ M ]) [ B₂ ; −B ] ,  t > 0 ;   w₂(t) = 0 ,  t < 0 ,

where P̃_j is the spectral projection of [ A_jˣ 0 ; BC_j M ] corresponding to the domain Ω_j (j = 1,2). Let

r(t) = iC₂ e^{−itA₂ˣ} P₂ˣ S⁻¹ B ,  t > 0 ;   r(t) = 0 ,  t < 0 ,

where P₂ˣ is understood here as the operator Zˣ → Im P₂ˣ which is the identity on the first component of Zˣ = Im P₂ˣ ⊕ Im P₁ˣ and zero on the second. The operator P₁ˣ is interpreted analogously.


THEOREM 7.1. The operator E defined by (7.3) is invertible if and only if the left indicator S defined by (6.3) is invertible. If this condition holds, then for any g(t) ∈ e^{−h|t|}L₁(ℝ;Y) the unique solution of (7.1) is given by the formula

φ(t) = g(t) + ∫_{−∞}^{∞} γ(t,s)g(s)ds ,

where

γ(t,s) = r(t − s) + w(t,s) + ∫_{−∞}^{∞} r(t − u)w(u,s)du ,

and

w(t,s) = w₁(t − s) ,  t < 0 ;   w(t,s) = w₂(t − s) ,  t > 0 .

Proof. The first assertion of the theorem is already proved. Let S be invertible. Then the inverses of the factors in the left canonical factorization W(λ) = W₋(λ)W₊(λ) (λ ∈ Γ) are given by formulas (5.7) and (5.8). As W₋(λ)⁻¹ is analytic in Ω₁ ∪ Ω₂, we have

[C_j , CˣS⁻¹] P̃_j = 0 ,  j = 1,2 ,

and hence

[C_j , CˣS⁻¹] P̃_j exp(−it [ A_jˣ 0 ; BC_j M ]) P̃_j [ B_j ; −B ] = 0 .

Now one checks easily that

W₋(λ)⁻¹ = I + ∫_{−∞}^{∞} e^{iλt} w_j(t)dt ,  λ ∈ Γ_j ,  j = 1,2.


Therefore, for λ ∈ Γ_j we obtain

W₋(λ)⁻¹ G_j(λ) = (I + ∫_{−∞}^{∞} e^{iλt} w_j(t)dt) ∫_{−∞}^{∞} g_j(t)e^{iλt}dt =

= ∫_{−∞}^{∞} [ g_j(t) + ∫_{−∞}^{∞} w_j(t − s)g_j(s)ds ] e^{iλt}dt .

For an L(Y)-valued function A(λ) which is analytic on Γ₁ ∪ Γ₂ and has the form

A(λ) = ∫_{−∞}^{∞} a_j(t)e^{iλt}dt ,  λ ∈ Γ_j ,  j = 1,2 ,

for some L(Y)-valued functions a₁(t) and a₂(t), let Q₊A(λ) be the function which admits analytic continuation into Ω⁺, has value 0 at infinity, and is such that the difference A(λ) − Q₊A(λ), λ ∈ Γ_j, admits analytic continuation into Ω_j (j = 1,2). It is easily seen that

Q₊A(λ) = ∫_{−∞}^{0} a₁(t)e^{iλt}dt + ∫_{0}^{∞} a₂(t)e^{iλt}dt .

Applying this formula with A(λ) = W₋(λ)⁻¹G(λ), we have

(7.11)  Q₊W₋(λ)⁻¹G(λ) = ∫_{−∞}^{0} [ g₁(t) + ∫_{−∞}^{∞} w₁(t − s)g₁(s)ds ] e^{iλt}dt +

        + ∫_{0}^{∞} [ g₂(t) + ∫_{−∞}^{∞} w₂(t − s)g₂(s)ds ] e^{iλt}dt =

        = ∫_{−∞}^{∞} [ g(t) + ∫_{−∞}^{∞} w(t,s)g(s)ds ] e^{iλt}dt .

On the other hand,

(7.12)  W₊(λ)⁻¹ = I + ∫_{0}^{∞} r(t)e^{iλt}dt ,  λ ∈ Ω⁺ .

Now observe that the Fourier transform Φ⁺(λ) of the desired function φ(t) is given by the formula

Φ⁺(λ) = W₊(λ)⁻¹ Q₊ W₋(λ)⁻¹ G(λ) ,

and use (7.11) and (7.12) to derive the required formula for φ(t). □
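The computations in this section all rest on the interplay between exponential representations and realizations: for a kernel of the form k(t) = iCe^{−itA}(I − P)B (t > 0), −iCe^{−itA}PB (t < 0), the symbol satisfies I − K(λ) = I + C(λ − A)⁻¹B. A scalar numerical sanity check (A = −i, B = C = 1 are hypothetical sample data; the spectrum of A lies in the lower half plane, so the Riesz projection P is 0):

```python
import numpy as np

# Hypothetical scalar data: A = -1j, B = C = 1.  Then P = 0 and
#   k(t) = i*C*exp(-i*t*A)*(I - P)*B = i*exp(-t)  for t > 0,  k(t) = 0 for t < 0,
# and the symbol should satisfy  I - K(lambda) = I + C*(lambda - A)^{-1}*B.
t = np.linspace(0.0, 40.0, 200001)
k = 1j * np.exp(-t)

def K(lam):
    # trapezoid rule for K(lambda) = \int_0^inf e^{i*lam*t} k(t) dt
    f = np.exp(1j * lam * t) * k
    return np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2

for lam in [0.0, 0.7, -1.3]:
    lhs = 1.0 - K(lam)
    rhs = 1.0 + 1.0 / (lam + 1j)   # I + C(lambda - A)^{-1}B with A = -i
    assert abs(lhs - rhs) < 1e-4, (lam, lhs, rhs)
print("symbol identity verified at sample points")
```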


8. Wiener-Hopf equation with two kernels. Consider the equation

(8.1)  φ(t) − ∫_{0}^{∞} k₁(t − s)φ(s)ds − ∫_{−∞}^{0} k₂(t − s)φ(s)ds = g(t) ,  −∞ < t < ∞ ,

where g(t) is a given function which belongs to the Banach space e^{h|t|}L₁(ℝ;Y) of all Y-valued functions f(t), −∞ < t < ∞, such that

∫_{−∞}^{∞} e^{−h|t|} ||f(t)|| dt < ∞ .

(As before, Y is a fixed Banach space and h > 0 is a fixed constant.) The solution φ(t) is sought also in e^{h|t|}L₁(ℝ;Y).
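Membership in the weighted space e^{h|t|}L₁(ℝ;Y) amounts to finiteness of the integral above; in particular the space contains functions that grow at infinity. A throwaway scalar check (h = 1 and the function f below are hypothetical):

```python
import numpy as np

# f(t) = exp(0.5|t|) is unbounded, yet f lies in e^{|t|} L1(R) because
#   int exp(-|t|) |f(t)| dt = int exp(-0.5|t|) dt = 4 < infinity.
# (h = 1 and f are hypothetical sample data.)
h = 1.0
t = np.linspace(-60.0, 60.0, 400001)
f = np.exp(0.5 * np.abs(t))
integrand = np.exp(-h * np.abs(t)) * np.abs(f)
weighted_norm = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2
print(round(weighted_norm, 3))
```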

We assume that the kernels k₁, k₂ have exponential representations

k_j(t) = iC_j e^{−itA_j}(I − Q_j)B_j ,  t > 0 ,

k_j(t) = −iC_j e^{−itA_j}Q_j B_j ,  t < 0 ,

for j = 1,2, where B_j : Y → X_j , A_j : X_j → X_j , C_j : X_j → Y are linear bounded operators (with some Banach spaces X₁ and X₂) such that

σ(A_j) ∩ Γ_j = ∅  (j = 1,2).

(Here we continue to use the notations introduced in the second part of Section 6.) The projection Q_j (j = 1,2) is the spectral projection corresponding to the part of the spectrum of A_j (j = 1,2) lying in the halfplane Ω_j. These requirements ensure, in particular, that k_j ∈ e^{−h|t|}L₁(ℝ;L(Y)) (j = 1,2) and the operator F defined by the left hand side of (8.1):

(8.2)  (Fφ)(t) = φ(t) − ∫_{0}^{∞} k₁(t − s)φ(s)ds − ∫_{−∞}^{0} k₂(t − s)φ(s)ds ,

is a linear bounded operator acting in the space e^{h|t|}L₁(ℝ;Y).

Using the procedure from § 2 of the Appendix in [6], one can reduce (8.1) to the following barrier problem:


(8.3)

Here

I − K_j(λ) = I + C_j(λ − A_j)⁻¹B_j  (j = 1,2)

are analytic operator functions in Ω̄⁺ ,

G⁽¹⁾(λ) = ∫_{0}^{∞} g(t)e^{iλt}dt ;  G⁽²⁾(λ) = −∫_{−∞}^{0} g(t)e^{iλt}dt

(so Ψ⁽ʲ⁾(λ) and G⁽ʲ⁾(λ) are analytic in Ω_j , j = 1,2); Φ⁺(λ) is analytic in Ω⁺. Introduce the contourwise operator function

W(λ) = I + C_j(λ − A_j)⁻¹B_j ,  λ ∈ Γ_j ,  j = 1,2.

As follows from the results in [8], the barrier problem (8.3), and hence the equation (8.1), has a unique solution for every g(t) ∈ e^{h|t|}L₁(ℝ;Y) if and only if the function W(λ) admits a right canonical factorization:

(8.4)  W(λ) = W₊(λ)W₋(λ) ,  λ ∈ Γ .

Hence we can invoke Theorem 6.2 and obtain that the operator F defined by (8.2) is invertible if and only if the right indicator R given by (6.4) is invertible. Further, assume W(λ) admits the factorization (8.4). Then one has

(8.5) j = 1,2,

where for every L(Y)-valued function A(λ) which is analytic on Γ₁ ∪ Γ₂ we define the L(Y)-valued function Q₋A(λ) by the properties that Q₋A(λ) is analytic in Ω₁ ∪ Ω₂ and has value zero at infinity, while


A(λ) − Q₋A(λ) is analytic in Ω⁺. According to Theorem 6.2 the formulas for W₊⁻¹ and W₋⁻¹ are provided by (5.9) and (5.10), and one can easily check that

W₊(λ)⁻¹ = I + ∫_{−∞}^{∞} r(t)e^{iλt} dt ,  λ ∈ Ω⁺ ,

where r(t) = 0 for t < 0. Also,

(8.6)  W₋(λ)⁻¹ = I + ∫_{−∞}^{∞} w_j(t)e^{iλt} dt ,  λ ∈ Ω_j ,  j = 1,2 ,

where

w₁(t) = i[C₁ , C](I − P̃₁) exp(−it [ A₁ˣ 0 ; BC₁ M ]) [ B₁ ; −B ] ,  t > 0 ;   w₁(t) = 0 ,  t < 0 ,

w₂(t) = i[C₂ , C](I − P̃₂) exp(−it [ A₂ˣ 0 ; BC₂ M ]) [ B₂ ; −B ] ,  t > 0 ;   w₂(t) = 0 ,  t < 0 ,

and P̃_j is the spectral projection of [ A_jˣ 0 ; BC_j M ] corresponding to the domain Ω_j (j = 1,2).

With these definitions we have the following result.


THEOREM 8.1. The operator F defined by (8.2) is invertible if and only if the right indicator R defined by (6.4) is invertible. If this condition holds, then for any g(t) ∈ e^{h|t|}L₁(ℝ,Y) the unique solution φ(t) ∈ e^{h|t|}L₁(ℝ,Y) of the equation (8.1) is given by the formula

(8.7)  φ(t) = g(t) + ∫_{−∞}^{∞} r(t − s)g(s)ds + ∫_{−∞}^{∞} γ(t,s)g(s)ds ,

where

γ(t,s) = w₁⁰(t,s) + ∫_{0}^{t} w₁(t − v)r(v − s)dv ,  t > 0 ,

γ(t,s) = w₂⁰(t,s) + ∫_{t}^{0} w₂(t − v)r(v − s)dv ,  t < 0 ,

and

w₁⁰(t,s) = w₁(t − s) ,  s > 0 ;   w₁⁰(t,s) = 0 ,  s < 0 ,

w₂⁰(t,s) = w₂(t − s) ,  s < 0 ;   w₂⁰(t,s) = 0 ,  s > 0 .

Proof. We have

W₊⁻¹(λ) G⁽²⁾(λ) = ∫_{−∞}^{∞} [ −g₂(t) − ∫_{−∞}^{0} r(t − s)g(s)ds ] e^{iλt} dt ,  λ ∈ Γ₂ ,

where

g₁(t) = g(t) ,  t > 0 ;   g₁(t) = 0 ,  t < 0 ,

g₂(t) = 0 ,  t > 0 ;   g₂(t) = g(t) ,  t < 0 .

One checks easily that

Q₋(W₊⁻¹G⁽²⁾)(λ) = ∫_{−∞}^{0} [ −g₂(t) − ∫_{−∞}^{0} r(t − s)g(s)ds ] e^{iλt} dt .

Now in view of formulas (8.5) and (8.6) we have to check the equality

(8.8)  ∫_{−∞}^{∞} e^{iλt} [ g(t) + ∫_{−∞}^{∞} r(t − s)g(s)ds + ∫_{−∞}^{∞} γ(t,s)g(s)ds ] dt =

       = ∫_{0}^{∞} e^{iλt} [ g₁(t) + ∫_{−∞}^{∞} r(t − s)g(s)ds ] dt +

       + ∫_{0}^{∞} w₁(t)e^{iλt} dt · ∫_{0}^{∞} [ g₁(t) + ∫_{−∞}^{∞} r(t − s)g(s)ds ] e^{iλt} dt ,

as well as the analogous equality for G⁽²⁾(λ). We shall indicate only how to verify (8.8), or, equivalently, the formula

∫_{0}^{∞} e^{iλt} ∫_{−∞}^{∞} [ ∫_{0}^{t} w₁(t − v)r(v − s)dv + w₁⁰(t,s) ] g(s)ds dt =

       = ∫_{0}^{∞} w₁(t)e^{iλt} dt · ∫_{0}^{∞} [ g₁(t) + ∫_{−∞}^{∞} r(t − s)g(s)ds ] e^{iλt} dt .

It is enough to check that

∫_{0}^{∞} e^{iλt} ∫_{−∞}^{∞} [ ∫_{0}^{t} w₁(t − v)r(v − s)dv ] g(s)ds dt = ∫_{0}^{∞} w₁(t)e^{iλt} dt · ∫_{0}^{∞} ( ∫_{−∞}^{∞} r(t − s)g(s)ds ) e^{iλt} dt

and

∫_{0}^{∞} e^{iλt} ∫_{−∞}^{∞} w₁⁰(t,s)g(s)ds dt = ∫_{0}^{∞} w₁(t)e^{iλt} dt · ∫_{0}^{∞} g₁(t)e^{iλt} dt ;

both equalities are easily verifiable. □

9. The discrete case. Results analogous to those obtained in Sections 7 and 8 hold also

for the discrete counterparts of equations (7.1) and (8.1). Namely, for a fixed number h > 1, consider the equations


(9.1)  φ_j − Σ_{k=−∞}^{∞} b_{j−k} φ_k = g_j ,  j < 0 ,

       φ_j − Σ_{k=−∞}^{∞} a_{j−k} φ_k = g_j ,  j ≥ 0 ,

where {g_j}_{j=−∞}^{∞} is a given Y-valued sequence such that

Σ_{j=−∞}^{∞} h^{|j|} ||g_j|| < ∞

(in short, {g_j}_{j=−∞}^{∞} ∈ h^{−|j|}ℓ₁(Y)), {φ_j}_{j=−∞}^{∞} is a Y-valued sequence from h^{−|j|}ℓ₁(Y) to be found, and {a_j}_{j=−∞}^{∞}, {b_j}_{j=−∞}^{∞} are L(Y)-valued sequences such that

Σ_{j=−∞}^{∞} h^{|j|} ||a_j|| < ∞ ,   Σ_{j=−∞}^{∞} h^{|j|} ||b_j|| < ∞ .

Assuming that the functions W₁(λ) = Σ_{j=−∞}^{∞} λ^j b_j and W₂(λ) = Σ_{j=−∞}^{∞} λ^j a_j are analytic and invertible in a neighbourhood of Γ₁ = {λ | |λ| = h⁻¹} and Γ₂ = {λ | |λ| = h}, there exist realizations

(9.2)  W_i(λ) = I + C_i(λ − A_i)⁻¹B_i ,  i = 1,2 ,

such that σ(A_i) ∩ Γ_i = σ(A_i − B_iC_i) ∩ Γ_i = ∅. It turns out that the operator E : h^{−|j|}ℓ₁(Y) → h^{−|j|}ℓ₁(Y) defined by E{φ_j}_{j=−∞}^{∞} = {g_j}_{j=−∞}^{∞}, where the g_j are given by (9.1), is invertible if and only if the left indicator S of the realizations (9.2) with respect to Γ = Γ₁ ∪ Γ₂ is invertible. One can write down explicit formulas for the solution of (9.1) in terms of the operators A_i, B_i, C_i and S⁻¹.
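To see equation (9.1) concretely, one can also solve a finite section of it numerically (in the spirit of the projection methods of [6]); all sequences below are hypothetical sample data, and the a-kernel is taken, as above, to act on the rows with j ≥ 0:

```python
import numpy as np

# Finite section of the discrete pair equation (9.1):
#   phi_j - sum_k b_{j-k} phi_k = g_j   (j < 0),
#   phi_j - sum_k a_{j-k} phi_k = g_j   (j >= 0),
# truncated to |j|, |k| <= N.  The kernels a, b are hypothetical and
# finitely supported, so the truncated matrix is diagonally dominant.
N = 30
idx = np.arange(-N, N + 1)
a = {0: 0.2, 1: 0.1}      # rows with j >= 0
b = {0: 0.3, -1: 0.1}     # rows with j < 0

T = np.eye(len(idx))
for r, j in enumerate(idx):
    ker = b if j < 0 else a
    for c, k in enumerate(idx):
        T[r, c] -= ker.get(j - k, 0.0)

rng = np.random.default_rng(0)
g = rng.standard_normal(len(idx))
phi = np.linalg.solve(T, g)              # finite-section approximation
print(np.max(np.abs(T @ phi - g)) < 1e-10)
```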

The discrete analogue of equation (8.1) is

(9.3)  φ_j − Σ_{k=0}^{∞} a_{j−k} φ_k − Σ_{k=−∞}^{−1} b_{j−k} φ_k = g_j ,  −∞ < j < ∞ ,

where {g_j}_{j=−∞}^{∞}, {φ_j}_{j=−∞}^{∞} ∈ h^{|j|}ℓ₁(Y) and {a_j}_{j=−∞}^{∞}, {b_j}_{j=−∞}^{∞} ∈ h^{−|j|}ℓ₁(L(Y)).

The equation (9.3) can be analysed analogously to the analysis of (8.1) in the preceding section, with


W₁(λ) = Σ_{j=−∞}^{∞} λ^j b_j   and   W₂(λ) = Σ_{j=−∞}^{∞} λ^j a_j .

REFERENCES
1. Bart, H., Gohberg, I., Kaashoek, M.A.: Minimal factorization of matrix and operator functions. Operator Theory: Advances and Applications, vol. 1, Birkhauser Verlag, Basel, 1979.

2. Bart, H., Gohberg, I., Kaashoek, M.A.: Wiener-Hopf integral equations, Toeplitz matrices and linear systems. In: Toeplitz Centennial (Ed. I. Gohberg), Operator Theory: Advances and Applications, vol. 4, Birkhauser Verlag, Basel, 1982; pp. 85-135.

3. Bart, H., Gohberg, I., Kaashoek, M.A.: The coupling method for solving integral equations. In: Topics in Operator Theory, Systems and Networks, The Rehovot Workshop (Ed. H. Dym, I. Gohberg), Operator Theory: Advances and Applications, vol. 12, Birkhauser Verlag, Basel, 1984, pp. 39-73.

4. Bart, H., Kroon, L.S.: An indicator for Wiener-Hopf integral equations with invertible analytic symbol. Integral Equations and Operator Theory, 6/1 (1983), 1-20.

5. Daleckii, Iu. L., Krein, M.G.: Stability of solutions of differential equations in Banach space. Amer. Math. Soc. Transl. 43, American Mathematical Society, Providence, R.I., 1974.

6. Gohberg, I.C., Feldman, I.A.: Convolution equations and projection methods of their solution. Amer. Math. Soc. Transl. 41, American Mathematical Society, Providence, R.I., 1974.

7. Gohberg, I., Kaashoek, M.A., Lerer, L., Rodman, L.: Minimal divisors of rational matrix functions with prescribed zero and pole structure. In: Topics in Operator Theory, Systems and Networks, The Rehovot Workshop (Ed. H. Dym, I. Gohberg), Operator Theory: Advances and Applications, vol. 12, Birkhauser Verlag, Basel, 1984, pp. 241-275.

8. Gohberg, I.C., Leiterer, I.: A criterion for factorization of operator functions with respect to a contour. Sov. Math. Doklady 14, No. 2 (1973), 425-429.

9. Gohberg, I., Lerer, L., Rodman, L.: Wiener-Hopf factorization of piecewise matrix polynomials. Linear Algebra and Appl. 52/53 (1983), 315-350.

I. Gohberg, School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv, Israel

M.A. Kaashoek, Subfaculteit Wiskunde en Informatica, Vrije Universiteit, 1007 MC Amsterdam, The Netherlands


L. Lerer, Department of Mathematics, Technion-Israel Institute of Technology, Haifa, Israel.

L. Rodman, School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv, Israel.


Operator Theory: Advances and Applications, Vol. 21 ©1986 Birkhauser Verlag Basel

CANONICAL PSEUDO-SPECTRAL FACTORIZATION AND WIENER-HOPF INTEGRAL EQUATIONS

Leen Roozemond 1)


Wiener-Hopf integral equations with rational matrix symbols that have zeros on the real line are studied. The concept of canonical pseudo-spectral factorization is introduced, and all possible factorizations of this type are described in terms of realizations of the symbol and certain supporting projections. With each canonical pseudo-spectral factorization is related a pseudo-resolvent kernel, which satisfies the resolvent identities and is used to introduce spaces of unique solvability.

0. INTRODUCTION
In this paper we study the invertibility properties of the vector-valued Wiener-Hopf integral equation

(0.1)  φ(t) − ∫_{0}^{∞} k(t − s)φ(s)ds = f(t) ,  t ≥ 0 ,

assuming the equation is of so-called non-normal type, which means (see [8], [6, § III.12], [7], and the references there) that the symbol has singularities on the real line. We assume additionally that the symbol is rational, and in our analysis we follow the approach of [1], which is based on realization.

First, let us recall the main features of the theory developed in [1, § IV.8] (see also [2]), for the case when the symbol has no singularities on the real line. Take k ∈ L₁^{m×m}(−∞,∞), and assume that the symbol W(λ) = I − ∫_{−∞}^{∞} k(t)e^{iλt}dt is a rational m × m matrix function. The symbol can be realized as a transfer function, i.e., it can be written in the form

(0.2)  W(λ) = I + C(λ − A)⁻¹B ,  −∞ < λ < ∞ ,

where A is a square matrix of order n, say, with no real eigenvalues, and B and C are matrices of sizes n × m and m × n, respectively. In [1] it is assumed that det W(λ) has no real zeros, which is equivalent to the condition that

1) Research supported by the Netherlands Organization for the Advancement of Pure Research (Z.W.O.).


Aˣ := A − BC has no eigenvalues on the real line.

It is known (see [5]) that for each f ∈ L₁^m[0,∞) the equation (0.1) has a unique solution φ ∈ L₁^m[0,∞) if and only if det W(λ) has no zeros on the real line and relative to the real line W admits a (right) canonical Wiener-Hopf factorization

(0.3)  W(λ) = W₋(λ)W₊(λ) .

The latter means that W₋(λ) and W₋(λ)⁻¹ are holomorphic in the open lower half plane and continuous up to the real line, while W₊(λ) and W₊(λ)⁻¹ are holomorphic in the open upper half plane and also continuous up to the real line. Furthermore we may take W₋(∞) = W₊(∞) = I.

In terms of the realization (0.2) a canonical Wiener-Hopf factorization exists if and only if on ℂⁿ (with n the order of A) there exists a supporting projection Π (i.e., Ker Π is invariant under A and Im Π is invariant under Aˣ) such that relative to the decomposition ℂⁿ = Ker Π ⊕ Im Π the matrices A and Aˣ admit the following partitioning:

(0.4)  A = [ A₁ * ; 0 A₂ ] ,  Aˣ = [ A₁ˣ 0 ; * A₂ˣ ] ,

with the extra property that the eigenvalues of A₁ and A₁ˣ are in the open upper half plane and those of A₂ and A₂ˣ in the open lower half plane. Furthermore, if such a supporting projection Π exists, then for the factors in (0.3) one may take

(0.5)  W₋(λ) = I + C(λ − A)⁻¹(I − Π)B ,

       W₊(λ) = I + CΠ(λ − A)⁻¹B ,

and for each f ∈ L₁^m[0,∞) the unique solution φ ∈ L₁^m[0,∞) of (0.1) is given by

(0.6)  φ(t) = f(t) + ∫_{0}^{∞} g(t,s)f(s)ds ,  t ≥ 0 ,

where the resolvent kernel g is given by

(0.7)  g(t,s) = iC e^{−itAˣ} Π e^{isAˣ} B ,  0 ≤ s < t ,

       g(t,s) = −iC e^{−itAˣ} (I − Π) e^{isAˣ} B ,  0 ≤ t < s .
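Formula (0.7) is directly computable once Aˣ, B, C and Π are given. A sketch with hypothetical diagonal data (so the matrix exponentials are elementary); the jump g(t, t−0) − g(t, t+0) = iCB serves as a quick consistency check:

```python
import numpy as np

# Hypothetical data: A^x = diag(2i, -i), supporting projection Pi onto the
# second coordinate (so Im Pi is A^x-invariant); B and C are sample vectors.
Ax = np.diag([2j, -1j])
Pi = np.diag([0.0, 1.0]).astype(complex)
B = np.array([[1.0], [1.0]], dtype=complex)
C = np.array([[1.0, 1.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def expd(M, tau):
    # exp(tau * M) for a diagonal matrix M
    return np.diag(np.exp(tau * np.diag(M)))

def g(t, s):
    # resolvent kernel (0.7)
    if s < t:
        return (1j * C @ expd(Ax, -1j * t) @ Pi @ expd(Ax, 1j * s) @ B)[0, 0]
    return (-1j * C @ expd(Ax, -1j * t) @ (I2 - Pi) @ expd(Ax, 1j * s) @ B)[0, 0]

# Across s = t the kernel jumps by exactly i*C*B.
eps = 1e-8
jump = g(1.0, 1.0 - eps) - g(1.0, 1.0 + eps)
print(abs(jump - 1j * (C @ B)[0, 0]) < 1e-6)
```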

In this paper we show that with appropriate modifications and the right understanding the theory described above can be carried over to the case when the (determinant of the) symbol W(λ) has zeros on the real line. To do this, we first of all replace the notion of canonical Wiener-Hopf factorization by the notion of canonical pseudo-R-spectral factorization. This


means a factorization of the symbol W(λ) of the form (0.3), where the factors W₋(λ), W₊(λ) and their inverses W₋(λ)⁻¹, W₊(λ)⁻¹ have the same properties as before, except now we do not require the inverses W₋(λ)⁻¹ and W₊(λ)⁻¹ to be continuous up to the real line. In other words we allow the factors W₋(λ) and W₊(λ) to have real zeros. In general, in contrast with canonical Wiener-Hopf factorizations, there may be many different non-equivalent canonical pseudo-R-spectral factorizations.

In this paper we describe how to get all canonical pseudo-R-spectral factorizations in terms of the realization (0.2). Recall that det W(λ) has zeros on the real line if and only if Aˣ has real eigenvalues. To find the canonical pseudo-R-spectral factorizations of W(λ) one has to split the spectral subspaces corresponding to the eigenvalues of Aˣ on the real line. In fact we prove that a canonical pseudo-R-spectral factorization exists if and only if there exists a supporting projection Π with the same properties as before, except now we have to allow that in (0.4) the entries A₁ˣ and A₂ˣ also have eigenvalues on the real line. If one has such a supporting projection Π, then the factors W₊ and W₋ in the corresponding canonical pseudo-R-spectral factorization are again given by (0.5).

We also show that, given a supporting projection Π corresponding to a canonical pseudo-R-spectral factorization, (0.7) defines a kernel which satisfies the following resolvent identities:

(0.8)  g(t,s) − ∫_{0}^{∞} k(t − u)g(u,s)du = k(t − s) ,  s ≥ 0 , t ≥ 0 ,

       k(t − s) − ∫_{0}^{∞} g(t,u)k(u − s)du = g(t,s) ,  s ≥ 0 , t ≥ 0 .

Let K and G be the integral operators with kernels k(t − s) and g(t,s). We use the resolvent identities (0.8) and the specific form of the kernel g(t,s) to introduce spaces of unique solvability. This means that in these spaces equation (0.1) is again uniquely solvable and the solution of (0.1) is given by (0.6). Also in [8], [6] and [7] spaces of unique solvability appear, but because of the use of the realization (0.2), the spaces that we derive in our analysis admit a more detailed description.

A few words about the organization of the paper. The paper consists of six sections. In the first section we introduce the notion of canonical pseudo-Γ-spectral factorization for arbitrary matrix functions and arbitrary Cauchy contours Γ. We introduce pseudo-Γ-spectral subspaces in Section 2. Subspaces of this type will be used later in the construction of the


factorizations. In the third section we give a description of all canonical pseudo-Γ-spectral factorizations in terms of realizations. A special case, non-negative rational matrix functions, is treated in Section 4. In Sections 5 and 6 we study Wiener-Hopf integral equations of non-normal type with rational symbol and we prove the results mentioned in the previous paragraph.

1. CANONICAL PSEUDO-SPECTRAL FACTORIZATIONS
To define canonical pseudo-spectral factorizations we need the notions of minimal factorization and local degree (see [1, Chapter IV]). Let W be a rational m × m matrix function, and let λ₀ ∈ ℂ. In a deleted neighbourhood of λ₀ we have the following expansion

(1.1)  W(λ) = Σ_{j=−q}^{∞} (λ − λ₀)^j W_j .

Here q is some positive integer. By the local degree of W at λ₀ we mean the number δ(W;λ₀) = rank Ω, where

Ω = [ W₋q  W₋q₊₁  …  W₋₁ ]
    [ 0    W₋q   …  W₋₂ ]
    [ ⋮           ⋱  ⋮  ]
    [ 0    0     …  W₋q ] .

The number δ(W;λ₀) is independent of q, as long as (1.1) holds. We define δ(W;∞) = δ(W̃;0), where W̃(λ) = W(1/λ). Note that W is analytic at λ₀ ∈ ℂ ∪ {∞} if and only if δ(W;λ₀) = 0. It is well-known (see [1, Chapter IV]) that the local degree has a sublogarithmic property, i.e., whenever W₁ and W₂ are rational m × m matrix functions, we have δ(W₁W₂;λ₀) ≤ δ(W₁;λ₀) + δ(W₂;λ₀) for each λ₀ ∈ ℂ ∪ {∞}. A factorization W(λ) = W₁(λ)W₂(λ) is called minimal at λ₀ if δ(W₁W₂;λ₀) = δ(W₁;λ₀) + δ(W₂;λ₀), and minimal if δ(W₁W₂;λ) = δ(W₁;λ) + δ(W₂;λ) for all λ ∈ ℂ ∪ {∞}. In other words, a factorization W(λ) = W₁(λ)W₂(λ) is minimal if it is minimal at λ₀ for all λ₀ ∈ ℂ ∪ {∞}.
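For a scalar function the local degree at a pole is just its order. A small sketch of the rank computation (the helper local_degree and the example coefficients are hypothetical illustrations):

```python
import numpy as np

# Local degree of a scalar W at lambda0 from its Laurent coefficients
# W_{-q}, ..., W_{-1}: the rank of the upper-triangular Toeplitz matrix Omega.
def local_degree(neg_coeffs):
    # neg_coeffs = [W_{-q}, ..., W_{-1}]
    q = len(neg_coeffs)
    Om = np.zeros((q, q))
    for i in range(q):
        for j in range(i, q):
            Om[i, j] = neg_coeffs[j - i]
    return int(np.linalg.matrix_rank(Om))

# W(lambda) = (lambda - lambda0)^{-2} + 3:  W_{-2} = 1, W_{-1} = 0.
assert local_degree([1.0, 0.0]) == 2   # double pole -> degree 2
# Padding with an extra zero coefficient (q = 3) must not change the answer:
assert local_degree([0.0, 1.0, 0.0]) == 2
print("local degrees computed")
```

The second assertion illustrates the remark that δ(W;λ₀) does not depend on the choice of q.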

Let W be a rational m × m matrix function given by the expansion (1.1). We call λ₀ a zero of W if in ℂᵐ there exist vectors x₀,…,x_q , x₀ ≠ 0, such that

W₋q x_i + … + W₋q₊i x₀ = 0  (i = 0,…,q).

Note that a matrix function may have a pole and a zero at the same point. If det W(λ) does not vanish identically, then λ₀ is a zero of W if and only if


λ₀ is a pole of W(λ)⁻¹. Minimality of a factorization W(λ) = W₁(λ)W₂(λ) can be understood as the absence of pole-zero cancellations (see [1, Theorem 4.6]).

We shall consider spectral factorizations with respect to a curve Γ. Throughout this paper Γ is a Cauchy contour on the Riemann sphere ℂ ∪ {∞}. Thus Γ is the positively oriented boundary of an open set, and consists of a finite number of non-intersecting closed rectifiable Jordan curves. We denote the inner (resp. outer) domain of Γ by Ω_Γ⁺ (resp. Ω_Γ⁻). Associated with Γ is the curve −Γ. As sets Γ and −Γ coincide, but they have opposite orientations, i.e., the inner (resp. outer) domain of −Γ is Ω_Γ⁻ (resp. Ω_Γ⁺).

A rational m × m matrix function W admits a (right) canonical pseudo-Γ-spectral factorization if W can be represented in the form

(1.2)  W(λ) = W₋(λ)W₊(λ) ,  λ ∈ Γ ,

where

(a) W₋ and W₊ are rational matrix functions, W₋ has no poles and no zeros in Ω_Γ⁻, W₊ has no poles and no zeros in Ω_Γ⁺;

(b) the factorization (1.2) is minimal at each point of Γ.

Since W₊ (resp. W₋) has no poles nor zeros in Ω_Γ⁺ (resp. Ω_Γ⁻), the factorization (1.2) is minimal at each point of Ω_Γ⁺ ∪ Ω_Γ⁻. Hence condition (b) can be replaced by

(b)' the factorization (1.2) is minimal.

Comparing the definitions of canonical pseudo-Γ-spectral factorization and canonical Wiener-Hopf factorization, two major differences appear. First of all, canonical Wiener-Hopf factorization is only defined for rational m × m matrix functions with no poles and no zeros on Γ. Secondly, the factors in a canonical Wiener-Hopf factorization are required to be continuous up to the boundary Γ.

If a rational m × m matrix function W has no poles and no zeros on the curve Γ, the notions of canonical pseudo-Γ-spectral factorization and canonical Wiener-Hopf factorization coincide. To see this, suppose W admits a canonical pseudo-Γ-spectral factorization with factors W₋ and W₊. If W has no poles and no zeros on Γ, then, because of the minimality condition (b), the factors W₋ and W₊ cannot have poles or zeros on Γ. Hence W₋, W₊, W₋⁻¹ and W₊⁻¹ are continuous up to the boundary Γ, and W(λ) = W₋(λ)W₊(λ) is a canonical Wiener-Hopf factorization with respect to Γ.

Let W be a rational m x m matrix function. Two canonical pseudo-


Γ-spectral factorizations W(λ) = W₋(λ)W₊(λ) and W(λ) = W̃₋(λ)W̃₊(λ) are called equivalent if there exists an invertible constant m × m matrix E such that W̃₋(λ) = W₋(λ)E, W̃₊(λ) = E⁻¹W₊(λ). If W admits a canonical Wiener-Hopf factorization, then all canonical Wiener-Hopf factorizations are equivalent (cf. [5]). This is not true for canonical pseudo-Γ-spectral factorizations, as the following examples show.

EXAMPLE 1.1. Let

(1.3)  W(λ) = [ λ/(λ+2i)   3iλ/((λ−i)(λ+2i)) ]
              [ 0          λ/(λ−i)           ] .

Then W is a rational 2 × 2 matrix function, with poles in i and −2i, and a zero in 0. Note that W(∞) = I. The matrix function W has many non-equivalent canonical pseudo-R-spectral factorizations. Indeed, put

\-i (l+a) i (l-;a) 1 \-1 \-1

W~a)(\) =

-ia ~ J ' \-i \-1

Hia lfurll wia ) (\)

I+2T \+ 1

= Hi (2-a} j . ia

TI7f \+2;

The function W₋^(α) has a pole in i, and a zero in 0. The function W₊^(α) has a pole in −2i, and a zero in 0. A straightforward computation shows that W(λ) = W₋^(α)(λ)W₊^(α)(λ), and obviously this factorization is minimal since there are no pole-zero cancellations. The factorizations W(λ) = W₋^(α)(λ)W₊^(α)(λ) and W(λ) = W₋^(β)(λ)W₊^(β)(λ) are not equivalent whenever α ≠ β. Indeed, we compute

[W₋^(α)(λ)]⁻¹ W₋^(β)(λ) = [ (λ+i(α−β))/λ   −i(α−β)/λ    ]
                          [ i(α−β)/λ       (λ−i(α−β))/λ ] .

Clearly, this is not constant whenever α ≠ β.

EXAMPLE 1.2. (The scalar case) Let W be a rational (scalar) function, with W(oo) = 1. We can write


W(λ) = (λ^ℓ + b_{ℓ−1}λ^{ℓ−1} + … + b₁λ + b₀) / (λ^ℓ + a_{ℓ−1}λ^{ℓ−1} + … + a₁λ + a₀)

for certain complex numbers a₀,…,a_{ℓ−1}, b₀,…,b_{ℓ−1}. We assume that the polynomials p(λ) = λ^ℓ + b_{ℓ−1}λ^{ℓ−1} + … + b₁λ + b₀ and q(λ) = λ^ℓ + a_{ℓ−1}λ^{ℓ−1} + … + a₁λ + a₀ do not have common zeros. Let Γ be a contour on the Riemann sphere ℂ ∪ {∞}. We write p(λ) = (λ−α₁)…(λ−α_{m₁})(λ−γ₁)…(λ−γ_{m₂})(λ−β₁)…(λ−β_{m₃}) and q(λ) = (λ−α₁ˣ)…(λ−α_{n₁}ˣ)(λ−γ₁ˣ)…(λ−γ_{n₂}ˣ)(λ−β₁ˣ)…(λ−β_{n₃}ˣ). Here α₁,…,α_{m₁}, α₁ˣ,…,α_{n₁}ˣ are in Ω_Γ⁺, β₁,…,β_{m₃}, β₁ˣ,…,β_{n₃}ˣ are in Ω_Γ⁻, and γ₁,…,γ_{m₂}, γ₁ˣ,…,γ_{n₂}ˣ are on Γ. We have m₁ + m₂ + m₃ = n₁ + n₂ + n₃ = ℓ.

Suppose W(λ) = W₋(λ)W₊(λ) is a canonical pseudo-Γ-spectral factorization, and W₋(∞) = W₊(∞) = 1. We write W₋(λ) = p₋(λ)q₋(λ)⁻¹, W₊(λ) = p₊(λ)q₊(λ)⁻¹, for certain polynomials p₋, q₋, p₊ and q₊. We assume that p₋ and q₋, and p₊ and q₊, do not have common zeros. Since the factorization W(λ) = W₋(λ)W₊(λ) is minimal at each λ ∈ Γ, we have p(λ) = p₋(λ)p₊(λ), q(λ) = q₋(λ)q₊(λ). Furthermore, since W₋(∞) = W₊(∞) = 1, we have deg p₋ = deg q₋, deg p₊ = deg q₊. The zeros of p₋ are in the set {α₁,…,α_{m₁},γ₁,…,γ_{m₂}}, and α₁,…,α_{m₁} are zeros of p₋. The zeros of q₊ are in the set {γ₁ˣ,…,γ_{n₂}ˣ,β₁ˣ,…,β_{n₃}ˣ}, and β₁ˣ,…,β_{n₃}ˣ are zeros of q₊. We also have deg p₋ + deg q₊ = ℓ = m₁ + m₂ + m₃ = n₁ + n₂ + n₃. Hence one of the following two cases will occur: (i) m₁ ≤ n₁ ≤ m₁ + m₂ or (ii) n₁ ≤ m₁ ≤ n₁ + n₂. Using a combinatorial argument, we get that the total number of canonical pseudo-Γ-spectral factorizations W(λ) = W₋(λ)W₊(λ) such that W₋(∞) = W₊(∞) = 1 is equal to

(i)   Σ_{k=0}^{min(n₂, m₁+m₂−n₁)} (n₂ choose k)(m₂ choose k + n₁ − m₁) ,

(ii)  Σ_{k=0}^{min(m₂, n₁+n₂−m₁)} (m₂ choose k)(n₂ choose k + m₁ − n₁) ,

in the cases (i) and (ii), respectively.

2. PSEUDO-Γ-SPECTRAL SUBSPACES
Let A : X → X be a linear operator acting on a finite dimensional linear space X, and let Γ be a Cauchy contour on the Riemann sphere ℂ ∪ {∞}. We call a subspace L of X a pseudo-Γ-spectral subspace if L is A-invariant, A|L has no eigenvalues in Ω_Γ⁻, the outer domain of Γ, and L contains all eigenvectors and generalized eigenvectors corresponding to the eigenvalues of A in


Ω_Γ⁺, the inner domain of Γ. Denote the spectral projection corresponding to the eigenvalues of A in Ω_Γ⁺ (resp. in Ω_Γ⁻, on Γ) by P₊ (resp. P₋, P₀). Then L is a pseudo-Γ-spectral subspace if L is A-invariant and Im P₊ ⊆ L ⊆ Im P₊ ⊕ Im P₀. In other words, L is a pseudo-Γ-spectral subspace if and only if L = Im P₊ ⊕ K₀, where K₀ is an A-invariant subspace of Im P₀. It is clear that a pseudo-Γ-spectral subspace always exists, e.g., the spaces Im P₊ and Ker P₋ = Im P₊ ⊕ Im P₀ are pseudo-Γ-spectral subspaces. There exists only one pseudo-Γ-spectral subspace if and only if Im P₀ = (0), i.e., the operator A has no eigenvalues on Γ. In fact we have

PROPOSITION 2.1. Let A be an n × n matrix, and Γ a contour on the Riemann sphere ℂ ∪ {∞}. One of the following cases will occur:

(a) there is exactly one pseudo-Γ-spectral subspace,

(b) there are finitely many different pseudo-Γ-spectral subspaces,

(c) there is a continuum of pseudo-Γ-spectral subspaces.

The case (a) occurs if A has no eigenvalues on Γ. The case (b) occurs if the eigenvalues of A on Γ do not have more than one Jordan chain. The case (c) occurs if A has an eigenvalue with geometric multiplicity greater than one.

PROOF. Suppose A has an eigenvalue λ₀ ∈ Γ with geometric multiplicity at least two. Take two linearly independent vectors x₁, x₂ ∈ ℂⁿ such that Ax₁ = λ₀x₁, Ax₂ = λ₀x₂. For α ∈ ℂ define L_α = Im P₊ ⊕ span{αx₁ + (1−α)x₂}. The subspaces L_α, α ∈ ℂ, are pseudo-Γ-spectral subspaces. Furthermore, L_α ≠ L_β whenever α ≠ β.

Suppose all eigenvalues of A on Γ have geometric multiplicity one. Denote the eigenvalues of A on Γ by λ₁,…,λ_r, and their algebraic multiplicities by a₁,…,a_r. A simple combinatorial argument gives us that there are Π_{j=1}^{r} (a_j + 1) different A-invariant subspaces of Im P₀. Hence there are Π_{j=1}^{r} (a_j + 1) different pseudo-Γ-spectral subspaces.

Suppose A has no eigenvalues on Γ. Since Im P₊ = Ker P₋, we have that Im P₊ is the only pseudo-Γ-spectral subspace. □

We shall describe the behaviour of pseudo-Γ-spectral subspaces under some elementary operations. First, consider the operation of similarity. Suppose Ã = SAS⁻¹, where S : X → X is some invertible linear operator. Then L is a pseudo-Γ-spectral subspace of A if and only if SL is a pseudo-Γ-spectral subspace of Ã.

Next, assume that Ã : X̃ → X̃ is a dilation of A. This means that X̃ admits a direct sum decomposition, X̃ = X1 ⊕ X ⊕ X2, such that relative to this


Pseudo-spectral factorization 135

decomposition Ã admits the following partitioning

(2.1)  Ã = [ A11  A10  A12 ]
           [ 0    A    A02 ]
           [ 0    0    A22 ]

PROPOSITION 2.2. Let Ã be the dilation of A given by (2.1), and assume that A11 and A22 have no eigenvalues on Γ. Let P̃+ and P̃0 be the spectral projections of Ã corresponding to the eigenvalues inside Γ and on Γ, respectively, and let P+ and P0 be the analogous projections for A. Then the map

(2.2)  Im P̃+ ⊕ K̃0 ↦ Im P+ ⊕ P0K̃0,

with K̃0 an Ã-invariant subspace of Im P̃0, defines a one to one correspondence between the pseudo-Γ-spectral subspaces of Ã and those of A.

PROOF. It suffices to show that the map K̃0 ↦ P0K̃0 defines a one to one correspondence between the Ã-invariant subspaces of Im P̃0 and the A-invariant subspaces of Im P0. With respect to the decomposition X̃ = X1 ⊕ X ⊕ X2, P̃0 admits the partitioning

(2.3)  P̃0 = [ 0  V1  V2 ]
             [ 0  P0  V3 ]
             [ 0  0   0  ]

Suppose K̃0 is an Ã-invariant subspace of Im P̃0. We have ÃP̃0K̃0 = P̃0ÃK̃0. Note that an element x̃ = [x1, x, x2]^T of Im P̃0 satisfies x2 = 0, x ∈ Im P0 and x1 = V1x. Using (2.1), we compute that for each such x̃ ∈ K̃0

Ã x̃ = [ A11x1 + A10x,  Ax,  0 ]^T ∈ K̃0,

so the middle component of Ã x̃ is Ax. Thus AP0K̃0 = P0ÃK̃0 ⊆ P0K̃0. Hence P0K̃0 is an A-invariant subspace of Im P0.

Conversely, let K̃0 be a subspace of Im P̃0; recall that [x1, x, x2]^T ∈ K̃0 implies that x2 = 0, x ∈ Im P0 and x1 = V1x. Define K0 to be the subspace of X consisting of all vectors x such that [V1x, x, 0]^T ∈ K̃0. Obviously K0 ⊆ Im P0 and P0K̃0 = K0. Further, one checks easily that K̃0 is Ã-invariant whenever K0 is A-invariant.

□

3. DESCRIPTION OF ALL CANONICAL PSEUDO-Γ-SPECTRAL FACTORIZATIONS

To describe the canonical pseudo-Γ-spectral factorizations of a


136 Roozemond

rational matrix function W(λ) we realize W(λ) as the transfer function of a finite dimensional node, and next we use the geometric factorization theorem of [1]. But first let us recall the terminology we need from [1].

A finite dimensional node is a quintet θ = (A,B,C;X,Y) of two finite dimensional complex linear spaces X and Y and three linear operators A : X → X, B : Y → X and C : X → Y. The space X is called the state space of the node θ, and the operator A is called the main operator of the node θ. In what follows we consider only finite dimensional nodes, and therefore the words "finite dimensional" will be omitted.

The transfer function of a node θ = (A,B,C;X,Y) is defined to be the operator function W(λ) = I + C(λ − A)⁻¹B. Here I stands for the identity operator on the (finite dimensional) linear space Y. Note that W(λ) is a rational function whose poles are contained in the set of eigenvalues of A, and W(∞) = I. If W is a rational operator function, we call a node θ = (A,B,C;X,Y) a realization of W if W(λ) = I + C(λ − A)⁻¹B. Each rational function W whose values are operators on Y such that W(∞) = I appears as the transfer function of a node and thus has a realization.

A node θ = (A,B,C;X,Y) is called a minimal realization of W if θ is a realization of W and among all realizations of W the state space dimension of θ is as small as possible. A node θ = (A,B,C;X,Y) is called a minimal node if

∩_{j=0}^{n−1} Ker CA^j = (0),   ∨_{j=0}^{n−1} Im A^jB = X.

Here n = dim X. A node is minimal if and only if it is a minimal realization of its transfer function.

Two nodes θ1 = (A1,B1,C1;X1,Y) and θ2 = (A2,B2,C2;X2,Y) are called similar if there exists an invertible linear operator S : X2 → X1, called a node similarity, such that

A1 = SA2S⁻¹,   B1 = SB2,   C1 = C2S⁻¹.

Note that similar nodes have the same transfer function. For minimal nodes there is a converse, namely, two minimal nodes with the same transfer function are similar.

A node θ̃ = (Ã,B̃,C̃;X̃,Y) is called a dilation of a node θ = (A,B,C;X,Y) if there exists a decomposition X̃ = X1 ⊕ X ⊕ X2 such that relative to this decomposition the operators Ã, B̃ and C̃ have the following partitioning

(3.1)  Ã = [ A1  A10  A12 ]    B̃ = [ B1 ]    C̃ = ( 0  C  C2 ).
           [ 0   A    A02 ]        [ B  ]
           [ 0   0    A2  ]        [ 0  ]
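As an editorial aside, the dilation identity is easy to check numerically. The sketch below (all matrices are randomly generated for illustration and are not from the text) builds a dilation of the form (3.1) and verifies that a node and its dilation have the same transfer function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner node theta = (A, B, C; X, Y) with dim X = 2, dim Y = 2.
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2))

# Dilation (3.1): X~ = X1 + X + X2 with dim X1 = dim X2 = 1.
A1 = rng.standard_normal((1, 1)); A2 = rng.standard_normal((1, 1))
A10 = rng.standard_normal((1, 2)); A12 = rng.standard_normal((1, 1))
A02 = rng.standard_normal((2, 1))
B1 = rng.standard_normal((1, 2)); C2 = rng.standard_normal((2, 1))

A_til = np.block([[A1, A10, A12],
                  [np.zeros((2, 1)), A, A02],
                  [np.zeros((1, 1)), np.zeros((1, 2)), A2]])
B_til = np.vstack([B1, B, np.zeros((1, 2))])
C_til = np.hstack([np.zeros((2, 1)), C, C2])

def transfer(A, B, C, lam):
    # W(lam) = I + C (lam - A)^{-1} B
    n = A.shape[0]
    return np.eye(B.shape[1]) + C @ np.linalg.solve(lam * np.eye(n) - A, B)

lam = 2.7 + 1.3j
W_small = transfer(A, B, C, lam)
W_big = transfer(A_til, B_til, C_til, lam)
print(np.allclose(W_small, W_big))  # the node and its dilation share W
```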


Note that a node and its dilation have the same transfer function. Every realization of a rational operator function is a dilation of a minimal realization, and hence two nodes are realizations of the same function if and only if they are dilations of similar (minimal) nodes.

Let θ = (A,B,C;X,Y) be a node, and λ0 ∈ ℂ. The node θ will be called minimal at the point λ0 if

∩_{j=0}^{n−1} Ker CA^jP = Ker P,   ∨_{j=0}^{n−1} Im PA^jB = Im P.

Here n = dim X, and P denotes the spectral or Riesz projection of A corresponding to the point λ0. If λ0 is not an eigenvalue of A, then θ is automatically minimal at λ0. Further, θ is a minimal node if and only if θ is minimal at each eigenvalue of A (or, equivalently, at each λ0 ∈ ℂ).

Let θ = (A,B,C;X,Y) be a node, and let Γ be a Cauchy contour on the Riemann sphere. We call a pair (L,L×) of subspaces of X a pair of pseudo-Γ-spectral subspaces for θ if L is a pseudo-Γ-spectral subspace for A, and L× is a pseudo-(−Γ)-spectral subspace for A× = A − BC. Here −Γ denotes the Cauchy contour which coincides with Γ as a set, but has the opposite orientation. We call a pair (M1,M2) of subspaces of a linear space X matching if M1 ⊕ M2 = X, i.e., M1 ∩ M2 = (0) and M1 + M2 = X.

THEOREM 3.1. Let W be the transfer function of the node θ = (A,B,C;X,Y), and assume that θ is minimal at each point of the Cauchy contour Γ. Then there is a one to one correspondence between the right canonical pseudo-Γ-spectral factorizations of W and the matching pairs of pseudo-Γ-spectral subspaces for θ in the following sense:

(a) Given a matching pair (L,L×) of pseudo-Γ-spectral subspaces for θ, a canonical pseudo-Γ-spectral factorization W(λ) = W−(λ)W+(λ) is obtained by taking

(3.2)  W−(λ) = I + C(λ − A)⁻¹(I − Π)B,
       W+(λ) = I + CΠ(λ − A)⁻¹B,

where Π is the projection along L onto L×.

(b) Given a canonical pseudo-Γ-spectral factorization W(λ) = W−(λ)W+(λ), with W−(∞) = W+(∞) = I, there exists a unique matching pair (L,L×) of pseudo-Γ-spectral subspaces for θ such that W− and W+ are given by (3.2).

PROOF. The proof consists of three steps. Firstly, we prove (a). Secondly, we prove (b) for the case where θ is minimal. Thirdly, we prove (b)


in the general case, where θ is minimal at each λ ∈ Γ.

(i) Suppose (L,L×) is a matching pair of pseudo-Γ-spectral subspaces for θ. Denote the projection along L onto L× by Π. Define W− and W+ by (3.2). Using that Im Π is A×-invariant and Ker Π is A-invariant, we get A×Π = ΠA×Π and A(I − Π) = (I − Π)A(I − Π). Hence, as in [1], we compute that

(3.3)  W−⁻¹(λ) = I − C(I − Π)(λ − A×)⁻¹B,
       W+⁻¹(λ) = I − C(λ − A×)⁻¹ΠB.

Furthermore, we have W(λ) = W−(λ)W+(λ). Since (I − Π)A(I − Π) and (I − Π)A×(I − Π) have no eigenvalues in Ω_Γ^−, the function W− is holomorphic and invertible on Ω_Γ^−. Since ΠAΠ and ΠA×Π have no eigenvalues in Ω_Γ^+, the function W+ is holomorphic and invertible on Ω_Γ^+. Note that θ is the product of θ− and θ+, where

θ− = ((I − Π)A(I − Π), (I − Π)B, C(I − Π); Ker Π, Y),
θ+ = (ΠAΠ, ΠB, CΠ; Im Π, Y).

Further, the transfer functions of θ, θ− and θ+ are the functions W, W− and W+, respectively. From [1, Theorem 4.2] and the minimality of θ at each λ0 ∈ Γ we conclude that δ(W;λ0) = δ(W−;λ0) + δ(W+;λ0) at each λ0 ∈ Γ. Hence the factorization W(λ) = W−(λ)W+(λ) is minimal at each λ0 ∈ Γ.

(ii) Suppose θ is a minimal realization of W, and let W(λ) = W−(λ)W+(λ) be a canonical pseudo-Γ-spectral factorization with W−(∞) = W+(∞) = I. Using Theorem 4.8 from [1], we conclude that there exists a unique decomposition X = L ⊕ L× of X into the direct sum of an A-invariant subspace L and an A×-invariant subspace L× having the following property: if Π denotes the projection along L onto L×, then the functions W−, W+ and W−⁻¹, W+⁻¹ are given by (3.2) and (3.3), respectively. Here P+ and P0 (resp. P−× and P0×) denote the spectral projections of A (resp. A×) corresponding to Ω_Γ^+ and Γ (resp. Ω_Γ^− and Γ). Since W− is holomorphic and invertible on Ω_Γ^−, the minimality of θ implies that (I − Π)A(I − Π) and (I − Π)A×(I − Π) do not have eigenvalues in Ω_Γ^−. Hence L ⊆ Im P+ ⊕ Im P0 and Im P−× ⊆ L×. Similarly, since W+ is holomorphic and invertible on Ω_Γ^+, we have that ΠAΠ and ΠA×Π do not have eigenvalues in Ω_Γ^+. Hence Im P+ ⊆ L and L× ⊆ Im P−× ⊕ Im P0×. Thus (L,L×) is a matching pair of pseudo-Γ-spectral subspaces for θ.

(iii) Suppose θ is a realization of W which is minimal at each λ ∈ Γ, and let W(λ) = W−(λ)W+(λ) be a canonical pseudo-Γ-spectral factorization with W−(∞) = W+(∞) = I. The node θ is a dilation of a minimal node θ̃ = (Ã,B̃,C̃;X̃,Y). By the previous paragraph we know that there exists a unique projection Π̃ : X̃ → X̃ such that (Ker Π̃, Im Π̃) is a matching pair of pseudo-Γ-spectral subspaces for θ̃ and

W− and W+ are given by

W−(λ) = I + C̃(λ − Ã)⁻¹(I − Π̃)B̃,
W+(λ) = I + C̃Π̃(λ − Ã)⁻¹B̃.

Let the partitioning of A, B and C be given by (3.1). In particular, A and A× have the following form:

(3.4)  A = [ A1  A10  A12 ]    A× = [ A1  A10×  A12× ]
           [ 0   Ã    A02 ]         [ 0   Ã×    A02× ]
           [ 0   0    A2  ]         [ 0   0     A2   ]

Since θ̃ is minimal and θ is minimal at each λ ∈ Γ, the operators A1 and A2 do not have eigenvalues on Γ. Using Proposition 2.2, we can associate with Ker Π̃ a subspace L of X and with Im Π̃ a subspace L× of X such that (L,L×) is a pair of pseudo-Γ-spectral subspaces for θ. We shall prove that L ⊕ L× = X and that W− and W+ are given by (3.2), where Π is the projection along L onto L×.

Using (3.4), we have the following partitionings for the spectral projections:

P+ = [ P1  R1  R2 ]    P−× = [ I−P1  −R1×  −R2× ]
     [ 0   P̃+  R3 ]          [ 0     P̃−×   −R3× ]
     [ 0   0   P2 ]          [ 0     0     I−P2 ]

P0 = [ 0  V1  V2 ]     P0× = [ 0  V1×  V2× ]
     [ 0  P̃0  V3 ]           [ 0  P̃0×  V3× ]
     [ 0  0   0  ]           [ 0  0    0   ]

Since P+² = P+ and (P−×)² = P−× we have

(3.5)  P1R1 + R1P̃+ = R1,
       P1R2 + R1R3 + R2P2 = R2,
       P̃+R3 + R3P2 = R3,

and

(3.6)  (I−P1)R1× + R1×P̃−× = R1×,
       (I−P1)R2× − R1×R3× + R2×(I−P2) = R2×,
       P̃−×R3× + R3×(I−P2) = R3×.


We define operators S1, T1, T1× : X̃ → X1; S2, T2, T2× : X2 → X1, and S3, T3, T3× : X2 → X̃ as follows:

S1 = (I − P1)R1P̃+ − P1R1×P̃−×,
S2 = (I − P1)R2P2 − P1R2×(I − P2),
S3 = (I − P̃+)R3P2 − (I − P̃−×)R3×(I − P2),

T1 = −R1×P̃−×P̃+ − R1,
T2 = R1(I − P̃+)R3(I − P2) − R2,
T3 = −R3,

T1× = R1P̃+P̃−× + R1×,
T2× = R1×(I − P̃−×)R3×(I − P2) + R2×,
T3× = R3×.

Define S, T, T× : X1 ⊕ X̃ ⊕ X2 → X1 ⊕ X̃ ⊕ X2 by

S = [ I  S1  S2 ]    T = [ I  T1  T2 ]    T× = [ I  T1×  T2× ]
    [ 0  I   S3 ]        [ 0  I   T3 ]         [ 0  I    T3× ]
    [ 0  0   I  ]        [ 0  0   I  ]         [ 0  0    I   ]

The operators S, T and T× are invertible, and from (3.5) and (3.6) we get

Im P+ = S( Im P1 ⊕ Im P̃+ ⊕ Im P2 ),   Im P−× = S( Ker P1 ⊕ Im P̃−× ⊕ Ker P2 ).

We write Ker Π̃ = Im P̃+ ⊕ K̃0 and Im Π̃ = Im P̃−× ⊕ K̃0×, with K̃0 ⊆ Im P̃0 and K̃0× ⊆ Im P̃0×. Then

L = Im P+ ⊕ P0K̃0 = S( Im P1 ⊕ Im P̃+ ⊕ Im P2 ) ⊕ { (V1x, x, 0)^T : x ∈ K̃0 },

L× = Im P−× ⊕ P0×K̃0× = S( Ker P1 ⊕ Im P̃−× ⊕ Ker P2 ) ⊕ { (V1×x, x, 0)^T : x ∈ K̃0× }.

Hence

(3.7)  S⁻¹L = ( Im P1 ⊕ Im P̃+ ⊕ Im P2 ) ⊕ { ((V1 − S1)x, x, 0)^T : x ∈ K̃0 },
       S⁻¹L× = ( Ker P1 ⊕ Im P̃−× ⊕ Ker P2 ) ⊕ { ((V1× − S1×)x, x, 0)^T : x ∈ K̃0× }.

From these identities we conclude that S⁻¹(L + L×) contains X1 ⊕ 0 ⊕ X2 as well as all middle components Im P̃+ + K̃0 + Im P̃−× + K̃0× = Ker Π̃ + Im Π̃ = X̃, so that S⁻¹(L + L×) = X. Moreover, an element of S⁻¹L ∩ S⁻¹L× has its middle component in (Im P̃+ ⊕ K̃0) ∩ (Im P̃−× ⊕ K̃0×) = Ker Π̃ ∩ Im Π̃ = (0), and then the element lies in (Im P1 ⊕ 0 ⊕ Im P2) ∩ (Ker P1 ⊕ 0 ⊕ Ker P2) = (0).

Hence L ⊕ L× = X. Denote the projection along L onto L× by Π. We have, by (3.7),

S⁻¹ΠS = [ I−P1  *  *    ]
        [ 0     Π̃  *    ]
        [ 0     0  I−P2 ]

and hence

(3.8)  Π = [ I−P1  *  *    ]
           [ 0     Π̃  *    ]
           [ 0     0  I−P2 ]

From this we get CΠ(λ − A)⁻¹B = C̃Π̃(λ − Ã)⁻¹B̃ and C(λ − A)⁻¹(I − Π)B = C̃(λ − Ã)⁻¹(I − Π̃)B̃. Hence the factors W− and W+ are given by (3.2).


Finally, let us show the unicity of the matching pair of pseudo-Γ-spectral subspaces. Assume (L̂, L̂×) is a second matching pair of pseudo-Γ-spectral subspaces which yields the same factorization W(λ) = W−(λ)W+(λ). Using Proposition 2.2, we associate with (L̂, L̂×) a pair (L̂0, L̂0×) of pseudo-Γ-spectral subspaces for θ̃. By the same reasoning as before we have L̂0 ⊕ L̂0× = X̃, and the pair (L̂0, L̂0×) yields the same factorization W(λ) = W−(λ)W+(λ). By the minimality of θ̃ we have that L̂0 = Ker Π̃ and L̂0× = Im Π̃. Hence, by the one to one correspondence of Proposition 2.2, L̂ = L and L̂× = L×. □

EXAMPLE 3.2. (see Example 1.1) Let W be the rational 2 × 2 matrix function (1.3). We can write W(λ) = I + C(λ − A)⁻¹B, where

A = [ −2i  0 ]    B = [ −i  i ]    C = [ 2  1 ]
    [ 0    i ]        [ 0   i ]        [ 0  1 ]

Note that θ = (A,B,C;ℂ²,ℂ²) is minimal. Furthermore,

A× = A − BC = [ 0  0 ]
              [ 0  0 ]

Take Γ = R. We then have

P0× = [ 1  0 ]
      [ 0  1 ]

Hence the matching pairs of pseudo-R-spectral subspaces are given by (span{(0,1)^T}, span{x}), where x ∈ ℂ² and x, (0,1)^T are linearly independent. Without loss of generality we may assume x = xα = (1, −α)^T. We shall compute the corresponding canonical pseudo-R-spectral factorization W(λ) = W−(α)(λ)W+(α)(λ). Denote the projection along span{(0,1)^T} onto span{xα} by Πα. Then

Πα = [ 1   0 ]
     [ −α  0 ]

W−(α)(λ) = I + C(λ − A)⁻¹(I − Πα)B

         = [ (λ − i(1+α))/(λ − i)    i(1+α)/(λ − i)   ]
           [ −iα/(λ − i)             (λ + iα)/(λ − i) ]


and

W+(α)(λ) = I + CΠα(λ − A)⁻¹B

         = [ (λ + iα)/(λ + 2i)    i(2−α)/(λ + 2i)       ]
           [ iα/(λ + 2i)          (λ + i(2−α))/(λ + 2i) ]


Hence the factorizations described in Example 1.1 were all possible canonical pseudo-R-spectral factorizations.
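A numerical cross-check of this example (a sketch; the matrix C and hence A× and the closed forms above are reconstructions from a corrupted source): the factors (3.2) built from Πα should multiply to the same W(λ) for every α, and should agree with the closed form of W−(α) displayed above.

```python
import numpy as np

# Data of Example 3.2 (C as reconstructed in the text above).
A = np.array([[-2j, 0], [0, 1j]])
B = np.array([[-1j, 1j], [0, 1j]])
C = np.array([[2, 1], [0, 1]], dtype=complex)

def W(lam):
    return np.eye(2) + C @ np.linalg.solve(lam * np.eye(2) - A, B)

def factors(lam, a):
    # Projection along span{(0,1)^T} onto span{(1,-a)^T}
    Pi = np.array([[1, 0], [-a, 0]], dtype=complex)
    Res = np.linalg.inv(lam * np.eye(2) - A)
    Wm = np.eye(2) + C @ Res @ (np.eye(2) - Pi) @ B   # W_- of (3.2)
    Wp = np.eye(2) + C @ Pi @ Res @ B                 # W_+ of (3.2)
    return Wm, Wp

lam = 0.7 + 0.4j
for a in (0.0, 1.0, 3.5):
    Wm, Wp = factors(lam, a)
    assert np.allclose(Wm @ Wp, W(lam))  # W = W_- W_+ for each alpha

# Closed form of W_-^{(alpha)} from the example, at alpha = 1:
Wm1, _ = factors(lam, 1.0)
closed = np.array([[lam - 2j, 2j], [-1j, lam + 1j]]) / (lam - 1j)
print(np.allclose(Wm1, closed))
```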

EXAMPLE 3.3. The scalar case (cf. Example 1.2). Let W be a rational (scalar) function with W(∞) = 1. We can write

W(λ) = (λ^ℓ + b_{ℓ−1}λ^{ℓ−1} + … + b1λ + b0) / (λ^ℓ + a_{ℓ−1}λ^{ℓ−1} + … + a1λ + a0)

for certain complex numbers a0,…,a_{ℓ−1}, b0,…,b_{ℓ−1}. We assume that the polynomials p(λ) = λ^ℓ + b_{ℓ−1}λ^{ℓ−1} + … + b1λ + b0 and q(λ) = λ^ℓ + a_{ℓ−1}λ^{ℓ−1} + … + a1λ + a0 do not have common zeros. Put (see [2, Section 1.6])

A = [ 0    1            ]      B = [ 0 ]
    [      ⋱    ⋱       ]          [ ⋮ ]
    [ 0          1      ]          [ 0 ]
    [ −a0  …  −a_{ℓ−1}  ]          [ 1 ]

Here cj = bj − aj, j = 0,1,…,ℓ−1. We compute

A× = A − BC = [ 0    1            ]      C = ( c0  c1  …  c_{ℓ−1} ).
              [      ⋱    ⋱       ]
              [ 0          1      ]
              [ −b0  …  −b_{ℓ−1}  ]

The minimal node θ = (A,B,C;ℂ^ℓ,ℂ) is a realization of W. (The minimality follows from the assumption that p(λ) and q(λ) do not have common zeros.) The matrices A and A× only have eigenvalues with geometric multiplicity one. The algebraic multiplicity of an eigenvalue μ of A (resp. A×) is equal to the


multiplicity of μ as a pole (resp. zero) of W. Suppose μ is an eigenvalue of A (resp. A×) with algebraic multiplicity m. A basis of the spectral subspace of A (resp. A×) at μ is given by {ξj(μ)}_{j=0}^{m−1}, with

ξj(μ)_i = 0,                         i = 1,…,j,
ξj(μ)_i = (i−1 choose j) μ^{i−1−j},  i = j+1,…,ℓ,

for j = 0,1,…,m−1. The vectors ξ0(μ),…,ξ_{m−1}(μ) form a Jordan chain of A (resp. A×) corresponding to the eigenvalue μ.

It is well-known that, whenever μ1,…,μt are distinct complex numbers and s1,…,st are nonnegative integers such that Σ_{j=1}^t sj ≤ ℓ (the dimension of the state space), the vectors ξj(μk), j = 0,…,sk−1, k = 1,…,t, are linearly independent.

A direct consequence of this is the following. If N is A-invariant and N× is A×-invariant, then dim N + dim N× = ℓ if and only if N ⊕ N× = ℂ^ℓ.

Hence there exists a matching pair of pseudo-Γ-spectral subspaces if and only if there exists a pair (L,L×) of pseudo-Γ-spectral subspaces such that dim L + dim L× = ℓ. We denote by m1, m2 and m3 (resp. n1, n2 and n3) the sums of the algebraic multiplicities of the eigenvalues of A (resp. A×) in Ω_Γ^+ (the inner domain of Γ), on Γ, and in Ω_Γ^− (the outer domain of Γ). There exists a pair (L,L×) of pseudo-Γ-spectral subspaces such that dim L + dim L× = ℓ if and only if there exist nonnegative integers p, q such that

(3.9)  p + q = ℓ,   m1 ≤ p ≤ m1 + m2,   n3 ≤ q ≤ n2 + n3.

Using m1 + m2 + m3 = n1 + n2 + n3 = ℓ, we see that p, q satisfying (3.9) exist if and only if one of the following conditions holds: (i) m1 + m2 ≥ n1 ≥ m1 or (ii) n1 + n2 ≥ m1 ≥ n1. This is the same result as in Example 1.2.
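The companion-type realization of Example 3.3 is easy to exercise numerically. The sketch below (the polynomials are illustrative and not from the text) builds A, B and C for ℓ = 2 and checks that I + C(λ − A)⁻¹B reproduces p(λ)/q(λ).

```python
import numpy as np

# Companion realization of W = p/q as in Example 3.3 (cf. [2, Section 1.6]).
# Illustrative data: q(l) = l^2 + 3l + 2, p(l) = l^2 + 1 (no common zeros).
a = np.array([2.0, 3.0])          # a_0, a_1
b = np.array([1.0, 0.0])          # b_0, b_1
ell = len(a)

A = np.zeros((ell, ell))
A[:-1, 1:] = np.eye(ell - 1)      # superdiagonal of ones
A[-1, :] = -a                     # last row: -a_0 ... -a_{l-1}
B = np.zeros((ell, 1)); B[-1, 0] = 1.0
C = (b - a).reshape(1, ell)       # c_j = b_j - a_j
Ax = A - B @ C                    # companion matrix of p

lam = 1.5 + 0.5j
W_real = 1 + (C @ np.linalg.solve(lam * np.eye(ell) - A, B))[0, 0]
W_frac = (lam**2 + b[1] * lam + b[0]) / (lam**2 + a[1] * lam + a[0])
print(abs(W_real - W_frac) < 1e-9)
```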

4. NON-NEGATIVE RATIONAL MATRIX FUNCTIONS

In this section we shall treat the special case of non-negative rational matrix functions. It turns out that a non-negative rational matrix function always admits exactly one canonical pseudo-R-spectral factorization of a special symmetric type. We shall use a number of results from [4], [9],

[10] and [11].

A rational m × m matrix function W is called self-adjoint if W*(λ) := W(λ̄)* = W(λ). It is called non-negative if 0 ≤ ⟨W(λ)x, x⟩ for each x ∈ ℂᵐ, λ ∈ R (λ not a pole). A non-negative rational matrix function is automatically self-adjoint.

THEOREM 4.1. Let W be a non-negative rational m × m matrix function with W(∞) = I. Then W has exactly one canonical pseudo-R-spectral factorization of the type

(4.1)  W(λ) = N*(λ)N(λ),

with N(∞) = I. This factorization is obtained in the following way. Choose a minimal realization θ = (A,B,C;ℂⁿ,ℂᵐ) of W. Let L (resp. L×) be the span of all eigenvectors and generalized eigenvectors corresponding to the eigenvalues of A (resp. A×) in the open upper (resp. lower) half plane and of the first halves of the Jordan chains corresponding to real eigenvalues of A (resp. A×). Then (L,L×) is a matching pair of pseudo-R-spectral subspaces for θ, and the factor N in (4.1) is given by N(λ) = I + CΠ(λ − A)⁻¹B, where Π is the projection along L onto L×.

PROOF. Let θ = (A,B,C;ℂⁿ,ℂᵐ) be a minimal realization of W. Define sets σ, σ× as follows. The set σ (resp. σ×) consists of all eigenvalues of A (resp. A×) in the open lower half plane. Hence σ and σ× do not contain pairs of complex conjugate numbers. By [9, Corollary 4.4], there exists a unique rational m × m matrix function N such that

(i) W(λ) = N*(λ)N(λ) is a minimal factorization,
(ii) the set of non real poles of N is σ, and the set of non real zeros of N is σ×,
(iii) N(∞) = I.

In particular, W(λ) = N*(λ)N(λ) is a canonical pseudo-R-spectral factorization. From [9, Theorem 4.3 and Corollary 4.4] (see also [11, Theorem 1]) it is clear that the factor N in this factorization is obtained in the way described in the second part of the theorem. Finally, if we have a canonical pseudo-R-spectral factorization (4.1), then this factorization has the properties (i), (ii) and (iii) mentioned above, and hence there is only one such factorization. □

EXAMPLE 4.2. We consider

(4.2)  W(λ) = λ² / (λ² + 1).


In this case W is a non-negative rational (scalar) function. A minimal realization θ = (A,B,C;ℂ²,ℂ) of W is given by

(4.3)  A = [ i  0  ]    B = [ 1 ]    C = ( i/2  −i/2 ).
           [ 0  −i ]        [ 1 ]

Note that

(4.4)  A× = A − BC = (i/2) [ 1   1  ]
                           [ −1  −1 ]

We see that A has no real eigenvalues, and A× has precisely one eigenvalue, namely at 0, which has geometric multiplicity one. A corresponding Jordan chain is

(1, −1)^T,  (−i, −i)^T.

According to Theorem 4.1, we have a matching pair (L,L×) of pseudo-R-spectral subspaces, given by L = span{(1,0)^T}, L× = span{(1,−1)^T}. The corresponding canonical pseudo-R-spectral factorization is

W(λ) = (λ/(λ − i)) · (λ/(λ + i)).

Note that this is the only canonical pseudo-R-spectral factorization of W(λ) (of which the factors have the value 1 at infinity).
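The symmetric factorization of Example 4.2 can be confirmed directly: with N(λ) = λ/(λ + i) one has N*(λ) = N(λ̄)* = λ/(λ − i), and N*(λ)N(λ) = λ²/(λ² + 1) identically. A short check:

```python
import numpy as np

# Example 4.2: W(l) = l^2/(l^2 + 1), N(l) = l/(l + i).
def N(lam):
    return lam / (lam + 1j)

def Nstar(lam):
    # N*(l) = conj(N(conj(l))) = l/(l - i)
    return np.conj(N(np.conj(lam)))

for lam in (0.3, -2.0, 1.1 + 0.6j):
    W = lam**2 / (lam**2 + 1)
    assert abs(Nstar(lam) * N(lam) - W) < 1e-12
print("W = N* N checked")
```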

5. WIENER-HOPF INTEGRAL EQUATIONS OF NON-NORMAL TYPE

In this section we apply the theory of canonical pseudo-Γ-spectral factorization to Wiener-Hopf integral equations of non-normal type. Consider the Wiener-Hopf integral equation

(5.1)  φ(t) − ∫₀^∞ k(t − s)φ(s)ds = f(t),   t ≥ 0,

where k ∈ L1^{m×m}(−∞,∞). We shall assume that the symbol of (5.1),

(5.2)  W(λ) = I − ∫_{−∞}^∞ k(t)e^{iλt}dt,   λ ∈ R,

is a rational m × m matrix function.

The rationality of the symbol allows us to see W as the transfer function of a node θ = (A,B,C;ℂⁿ,ℂᵐ). That is, W(λ) = I + C(λ − A)⁻¹B. Throughout this section, we assume that A has no real eigenvalues. (Note that this is equivalent to the requirement that θ is minimal at each λ ∈ R.) Denote by P+ the spectral projection of A corresponding to the eigenvalues of

A in the upper half plane. Using W(λ) = I + C(λ − A)⁻¹B, and taking the inverse Fourier transform of (5.2), we get

(5.3)  k(t) = iCe^{−itA}(I − P+)B,   t > 0,
       k(t) = −iCe^{−itA}P+B,        t < 0.

Assume that the symbol W of the equation (5.1) is realized by the node θ = (A,B,C;ℂⁿ,ℂᵐ), where A has no real eigenvalues. Suppose that W has a canonical pseudo-R-spectral factorization W(λ) = W−(λ)W+(λ). Then there is a corresponding matching pair (L,L×) of pseudo-R-spectral subspaces for the node θ. Let Π be the projection along L = Im P+ onto L×, and define

(5.4)  g(t,s) = iCe^{−itA×}Πe^{isA×}B,          0 ≤ s < t,
       g(t,s) = −iCe^{−itA×}(I − Π)e^{isA×}B,   0 ≤ t < s.

From the proof of Theorem 3.1 it follows (cf. formulas (3.1), (3.4) and (3.8)) that the definition of g does not depend on the particular realization of W (as long as we assume that the main operator A does not have real eigenvalues). We call g the pseudo-resolvent kernel corresponding to the canonical pseudo-R-spectral factorization W(λ) = W−(λ)W+(λ). In case A× has no real eigenvalues, formula (5.4) gives the resolvent kernel as given in [2].
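As a concrete check of (5.3) (a sketch; the realization is the one reconstructed in (4.3) above, which belongs to the kernel of Example 5.2 below): the formula should return k(t) = ½e^{−|t|} on both half-lines.

```python
import numpy as np

# Data of (4.3)/(4.4), as reconstructed above.
A = np.diag([1j, -1j])
B = np.array([[1.0], [1.0]], dtype=complex)
C = np.array([[0.5j, -0.5j]])
Pplus = np.diag([1.0, 0.0]).astype(complex)    # spectral projection for eigenvalue i

def k(t):
    E = np.diag(np.exp(-1j * t * np.diag(A)))  # e^{-itA} for diagonal A
    if t > 0:
        return (1j * C @ E @ (np.eye(2) - Pplus) @ B)[0, 0]   # (5.3), t > 0
    return (-1j * C @ E @ Pplus @ B)[0, 0]                    # (5.3), t < 0

for t in (1.0, 2.5, -1.0):
    assert abs(k(t) - 0.5 * np.exp(-abs(t))) < 1e-12
print("k(t) = 0.5 exp(-|t|) recovered from (5.3)")
```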

THEOREM 5.1. Let k be the kernel of (5.1), and let g be the pseudo-resolvent kernel corresponding to a canonical pseudo-R-spectral factorization of the symbol. The following resolvent identities hold:

(5.5)  g(t,s) − ∫₀^∞ k(t − u)g(u,s)du = k(t − s),   s ≥ 0, t ≥ 0,

(5.6)  k(t − s) − ∫₀^∞ g(t,u)k(u − s)du = g(t,s),   s ≥ 0, t ≥ 0.

Here the integrals are considered as improper integrals, converging for each s ≥ 0, t ≥ 0.

PROOF. We shall prove (5.5) in case 0 ≤ s < t. The other case and (5.6) are left to the reader.

Assume 0 ≤ s < t, and take T > t. We have ∫₀^T k(t − u)g(u,s)du = I1 + I2 + I3, with

I1 = ∫₀^s iCe^{−i(t−u)A}(I − P+)B · (−i)Ce^{−iuA×}(I − Π)e^{isA×}B du,

I2 = ∫_s^t iCe^{−i(t−u)A}(I − P+)B · iCe^{−iuA×}Πe^{isA×}B du,

I3 = ∫_t^T (−i)Ce^{−i(t−u)A}P+B · iCe^{−iuA×}Πe^{isA×}B du.

We rewrite

I1 = −iCe^{−itA} ∫₀^s (I − P+) (d/du)[ e^{iuA}e^{−iuA×} ] (I − Π)e^{isA×}B du

   = −iCe^{−itA}(I − P+)e^{isA}e^{−isA×}(I − Π)e^{isA×}B + iCe^{−itA}(I − P+)(I − Π)e^{isA×}B

   = −iCe^{−itA}(I − P+)e^{isA}e^{−isA×}(I − Π)e^{isA×}B,

using that (I − P+)(I − Π) = 0. Similarly, we get

I2 = iCe^{−itA}(I − P+)[ e^{itA}e^{−itA×} − e^{isA}e^{−isA×} ]Πe^{isA×}B,

I3 = iCe^{−itA}P+[ e^{itA}e^{−itA×} − e^{iTA}e^{−iTA×} ]Πe^{isA×}B.

Hence

I1 + I2 + I3 = −k(t − s) + g(t,s) − iCe^{−itA}P+e^{iTA}e^{−iTA×}Πe^{isA×}B.

We have lim_{T→∞} P+e^{iTA}e^{−iTA×}Π = 0, since P+e^{iTA} decreases exponentially and e^{−iTA×}Π grows at most polynomially for T → ∞. Thus (5.5) has been proven in case

0 ≤ s < t. □

By K and G we denote the integral operators with kernels k(t − s) and g(t,s), given by (5.3) and (5.4), respectively. In case W(λ) is invertible for each λ ∈ R, the operator I − K : L_p^m[0,∞) → L_p^m[0,∞) is invertible with inverse I + G. Although the resolvent identities hold, this inversion result is not true in case det W(λ) has zeros on the real line. In fact, it may happen that (I − K)φ0 = 0 for some function φ0 ≠ 0, although from the resolvent identities one would expect [(I + G)(I − K)]φ0 = φ0.

EXAMPLE 5.2. (see Example 4.2) Consider the kernel k(t) = ½e^{−|t|},

t ∈ R. The symbol is given by W(λ) = λ²/(λ² + 1). We take the following minimal realization of W: θ = (A,B,C;ℂ²,ℂ) with A, B and C given by (4.3). In that case A× is given by (4.4). There is only one matching pair of pseudo-R-spectral subspaces, namely (span{(1,0)^T}, span{(1,−1)^T}). If we denote the projection along span{(1,0)^T} onto span{(1,−1)^T} by Π, we have

(5.7)  Π = [ 0  −1 ]
           [ 0   1 ]

Furthermore, we have

e^{itA×} = exp{ it [ 1   −i ] [ 0  1 ] [ 1   −i ]⁻¹ }
                   [ −1  −i ] [ 0  0 ] [ −1  −i ]

         = [ 1   −i ] exp( [ 0  it ] ) [ 1/2  −1/2 ]  =  [ 1 − t/2   −t/2    ]
           [ −1  −i ]      [ 0  0  ]   [ i/2   i/2  ]    [ t/2       1 + t/2 ]

The pseudo-resolvent kernel is given by

g(t,s) = iCe^{−itA×}Πe^{isA×}B = 1 + s,          0 ≤ s < t,
g(t,s) = −iCe^{−itA×}(I − Π)e^{isA×}B = 1 + t,   0 ≤ t < s.

Since the symbol has zeros on the real line, it is clear that I − K : L_p[0,∞) → L_p[0,∞) cannot be invertible with inverse I + G. In fact, the integral operator G is not defined for each f ∈ L_p[0,∞), and for φ0(t) = 1 + t, t ≥ 0, we have (I − K)φ0 = 0, while from the resolvent identities one expects [(I + G)(I − K)]φ0 = φ0.
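Both phenomena can be checked numerically for this example: the resolvent identity (5.5) holds for k(t) = ½e^{−|t|} and g(t,s) = 1 + min(t,s), and yet Kφ0 = φ0 for φ0(t) = 1 + t, so (I − K)φ0 = 0. A sketch (the truncation of the improper integrals at u = 60 is an arbitrary numerical choice):

```python
import numpy as np

def trap(vals, x):                      # simple trapezoid rule
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x)))

t, s = 2.0, 0.7
u = np.linspace(0.0, 60.0, 60001)       # truncation of the improper integrals

k = lambda x: 0.5 * np.exp(-np.abs(x))  # kernel of Example 5.2
gv = 1.0 + np.minimum(u, s)             # g(u, s) = 1 + min(u, s)

lhs = (1.0 + min(t, s)) - trap(k(t - u) * gv, u)
print(abs(lhs - k(t - s)) < 1e-4)       # resolvent identity (5.5)

K_phi0 = trap(k(t - u) * (1.0 + u), u)  # (K phi_0)(t) with phi_0(u) = 1 + u
print(abs(K_phi0 - (1.0 + t)) < 1e-4)   # K phi_0 = phi_0, i.e. (I-K)phi_0 = 0
```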

6. PAIRS OF FUNCTION SPACES OF UNIQUE SOLVABILITY

To deal with the phenomenon described at the end of the previous section we shall define function spaces of unique solvability.

As before, we assume the symbol of (5.1) to be rational. Let K be the integral operator with kernel k(t − s). Assume g to be the pseudo-resolvent kernel corresponding to a canonical pseudo-R-spectral factorization of the symbol. Let G be the integral operator with kernel g(t,s). A pair of function spaces (L,L×), where L ⊆ L_{1,loc}^m[0,∞) and L× ⊆ L_{1,loc}^m[0,∞), is called a pair of function spaces of unique solvability (for (5.1), corresponding to the pseudo-resolvent kernel g) if I − K : L → L× and I + G : L× → L are defined and invertible, and (I − K)⁻¹ = I + G. More precisely, this means the following:

(α) for φ ∈ L and f ∈ L× the integrals ∫₀^∞ k(t − s)φ(s)ds and ∫₀^∞ g(t,s)f(s)ds exist as improper integrals for each t ≥ 0,

(β) for each φ ∈ L and each f ∈ L× we have (I − K)φ ∈ L× and (I + G)f ∈ L,

(γ) for each φ ∈ L and each f ∈ L× we have (I + G)(I − K)φ = φ and (I − K)(I + G)f = f.

We shall construct (maximal) pairs (L,L×) in terms of a realization of the symbol of (5.1).

THEOREM 6.1. Let θ = (A,B,C;ℂⁿ,ℂᵐ) be a realization of the symbol of (5.1) such that A has no real eigenvalues, and let (L,L×) be a matching pair of pseudo-R-spectral subspaces for θ. Denote by P+ the spectral projection of A corresponding to the eigenvalues of A in the upper half plane, and let Π be the projection along L onto L×. Define function spaces L_θ, L_θ× ⊆ L_{1,loc}^m[0,∞) by

(a) φ ∈ L_θ if and only if

   (i) lim_{t→∞} ∫₀^t P+e^{isA}Bφ(s)ds exists,

   (ii) lim_{t→∞} (I − Π)e^{itA×}e^{−itA}[ ∫₀^t (I − P+)e^{isA}Bφ(s)ds − ∫_t^∞ P+e^{isA}Bφ(s)ds ] = 0;

(b) f ∈ L_θ× if and only if

   (i) lim_{t→∞} ∫₀^t (I − Π)e^{isA×}Bf(s)ds exists,

   (ii) lim_{t→∞} P+e^{itA}e^{−itA×}[ ∫₀^t Πe^{isA×}Bf(s)ds − ∫_t^∞ (I − Π)e^{isA×}Bf(s)ds ] = 0.

Then (L_θ, L_θ×) is a pair of function spaces of unique solvability. If θ is a minimal realization, then (L_θ, L_θ×) is maximal with respect to inclusion.

The maximality in the last part of the theorem has to be understood in the following way: for each pair (L,L×) of function spaces of unique solvability we have L ⊆ L_θ and L× ⊆ L_θ× whenever θ is minimal. The proof of Theorem 6.1 will be based on three lemmas for k and equivalent versions for g. In what follows it is assumed that θ = (A,B,C;ℂⁿ,ℂᵐ) is a realization of the symbol of (5.1) such that A has no real eigenvalues, and Π is as in Theorem 6.1.

LEMMA 6.2. Let k be given by (5.3) and let φ ∈ L_{1,loc}^m[0,∞). The improper integral ∫₀^∞ k(t − s)φ(s)ds exists for each t ≥ 0 if lim_{t→∞} ∫₀^t P+e^{isA}Bφ(s)ds exists. One may replace "if" by "if and only if" whenever θ is minimal.

PROOF. Take T ≥ t ≥ 0. Then

∫₀^T k(t − s)φ(s)ds = ∫₀^t iCe^{−i(t−s)A}Bφ(s)ds − ∫₀^T iCe^{−i(t−s)A}P+Bφ(s)ds.

Hence lim_{T→∞} ∫₀^T k(t − s)φ(s)ds exists if and only if lim_{T→∞} Ce^{−itA} ∫₀^T P+e^{isA}Bφ(s)ds exists. This is the case whenever lim_{T→∞} ∫₀^T P+e^{isA}Bφ(s)ds exists.

Suppose θ is minimal. Define C^{n−1}[0,∞) to be the space of (n−1)-times continuously differentiable ℂᵐ-valued functions whose derivatives, up to the order n−1, are bounded, and define a norm |||·||| on C^{n−1}[0,∞) by |||f||| = Σ_{j=0}^{n−1} ||f^{(j)}||_∞. With this norm, C^{n−1}[0,∞) is a normed linear space. Define Λ : Im P+ → C^{n−1}[0,∞) by (Λx)(t) = Ce^{−itA}x, t ≥ 0. Clearly, Λ is continuous. Since θ is minimal, the operator

Q = [ C        ]
    [ CA       ]
    [  ⋮       ]
    [ CA^{n−1} ]

is left invertible. Denote a left inverse by Q^+. Define Λ^+ : C^{n−1}[0,∞) → Im P+ by

Λ^+f = P+Q^+ [ f(0)                 ]
             [ i f′(0)              ]
             [  ⋮                   ]
             [ i^{n−1} f^{(n−1)}(0) ]

Then Λ^+ is continuous, and Λ^+Λx = x for each x ∈ Im P+. Hence, from the existence of lim_{T→∞} Ce^{−itA} ∫₀^T P+e^{isA}Bφ(s)ds we conclude that lim_{T→∞} ∫₀^T P+e^{isA}Bφ(s)ds exists. □

For g we have the following analogue of Lemma 6.2 (the proof is the same and is omitted).

LEMMA 6.2′. Let g be given by (5.4) and let f ∈ L_{1,loc}^m[0,∞). The improper integral ∫₀^∞ g(t,s)f(s)ds exists for each t ≥ 0 if lim_{t→∞} ∫₀^t (I − Π)e^{isA×}Bf(s)ds exists. One may replace "if" by "if and only if"

whenever θ is minimal.

LEMMA 6.3. Let k and g be given by (5.3) and (5.4), respectively, and let φ ∈ L_{1,loc}^m[0,∞). Assume that lim_{t→∞} ∫₀^t P+e^{isA}Bφ(s)ds exists. Then for f = (I − K)φ the improper integral ∫₀^∞ g(t,s)f(s)ds exists for each t ≥ 0 if

lim_{t→∞} (I − Π)e^{itA×}e^{−itA}[ ∫₀^t (I − P+)e^{isA}Bφ(s)ds − ∫_t^∞ P+e^{isA}Bφ(s)ds ]

exists. One may replace "if" by "if and only if" whenever θ is minimal.

PROOF. We compute for t ≥ 0 the integral ∫₀^t e^{isA×}Bf(s)ds = I1 + I2 + I3, where

I1 = ∫₀^t e^{isA×}Bφ(s)ds,

I2 = ∫₀^t e^{isA×}B · (−i)Ce^{−isA} ∫₀^s e^{irA}Bφ(r)dr ds,

I3 = ∫₀^t e^{isA×}B · iCe^{−isA}x ds.

Page 162: Constructive Methods of Wiener-Hopf Factorization

152

Here x = f; p+eiSAB~(S)ds. We rewrite

12 = l c&[eiSAXe-iSA] ~ eirAB~(r)drds =

. tAX 'tA t . At. AX = e1 e-1 f e1S B~s)ds - f e1S B~(s)ds.

o 0

Similarly, we get

13 = _eitAxe-itAx+x.

Hence

(6.1) l eiSAXBf(s)ds = eitAXe-itA[l eisAB (S)dS-X] + x.

Roozemond

We conclude t~atxli~t~ f5 {1_IT)eiSAXBf(s)ds exists if and only if limt~ (1_II)e1tA e-1tA[f5 e1sAB (s)ds - X] exists. We rewrite this to

lim (I_II)eitA~e-itA[f (1-P+)eiSAB~(S)dS - f p+eiSAB~S)dS] t~ 0 t

exists. Using Lemma 6.2', we prove the lemma. D The analogue of Lemma 6.3 is LEMMA 6.3'. Let k and g be given by (5.3) and (5.4), ~~peetivety,

and iet f E L~,lOC[O,oo). A~~ume that limt~ fg (1_IT)eisAXBf(S)dS e~~.Then 6M ~ = (1+G)f the imp~op~ imegMi f; k(t-s)~(s)ds ex~~ 6M eac.h t ~ 0 i6

'tA 'tAX[t . AX 00 • AX ] lim P+el e- l f IIe lS Bf(s)ds - f (1_IT)e1S Bf(s)ds t~ 0 t

e~u. One may ~epiac.e "if" by "if and only if" whenev~ 8 ~ minimal. LEMMA 6.4. Let k and g be given by (5.3) and (5.4), ~~pec.tivety,

and iet ~ E L~,loc[O,oo). A~~ume that x = limt~ f6 p+eiSAB~(S)dS ex~~. ~~ume that 6o~ f = (1-K)~ the limit y = limt~ fJ (1-IT)eisAxBf(s)ds ex~u. Then (1+G)(1-K)~ = ~ i6 y = x, o~ equivaiently,

lim (I_IT)eitAXe-itA[f (1-P+)eiSAB~(S)dS - J p+eiSAB~(S)dS] = O. t~ 0 t

One may ~epiac.e "if" by "if and only if" whenev~ e ~ minimai. PROOF. Using (6.1), we write

(Gf)(t) = ice-itAX[l eiSAXBf(S)dS_y] =

= ice-itA[l eiSAB~(S)dS-X] + iCe-itAx(x_y)

Page 163: Constructive Methods of Wiener-Hopf Factorization

Pseudo-spectral factorization 153

= (Kφ)(t) + iCe^{-itA^×}(x-y), t ≥ 0.

Since (I+G)(I-K)φ = φ + Gf - Kφ, and

(6.2) (Gf)(t) - (Kφ)(t) = iCe^{-itA^×}(x-y), t ≥ 0,

we have (I+G)(I-K)φ = φ if and only if iCe^{-itA^×}(x-y) = 0. The latter equality holds when x = y. When θ is minimal, iCe^{-itA^×}(x-y) = 0 for each t ≥ 0 if and only if x = y. We use (6.1) to rewrite

y = lim_{t→∞} ∫_0^t (I-Π)e^{isA^×}Bf(s)ds =

= lim_{t→∞} (I-Π)e^{itA^×}e^{-itA}[∫_0^t (I-P_+)e^{isA}Bφ(s)ds - ∫_t^∞ P_+e^{isA}Bφ(s)ds] + x.

From this we get the next-to-last statement in the theorem.

LEMMA 6.4'. Let k and g be given by (5.3) and (5.4), respectively, and let f ∈ L^m_{1,loc}[0,∞). Assume that y = lim_{t→∞} ∫_0^t (I-Π)e^{isA^×}Bf(s)ds exists. Assume that for φ = (I+G)f the limit x = lim_{t→∞} ∫_0^t P_+e^{isA}Bφ(s)ds exists. Then (I-K)(I+G)f = f if x = y, or equivalently,

lim_{t→∞} P_+e^{itA}e^{-itA^×}[∫_0^t Πe^{isA^×}Bf(s)ds - ∫_t^∞ (I-Π)e^{isA^×}Bf(s)ds] = 0.

One may replace "if" by "if and only if" whenever θ is minimal.

PROOF of Theorem 6.1. Suppose φ ∈ L_θ. By Lemma 6.2 we have that Kφ is defined. From Lemmas 6.3 and 6.4 we get (I-K)φ ∈ L_θ^×. The identity (I+G)(I-K)φ = φ follows from Lemma 6.4. Similarly we prove that for each f ∈ L_θ^× we have that Gf is defined, (I+G)f ∈ L_θ and (I-K)(I+G)f = f.

Next assume θ to be minimal and let (L,L^×) be an arbitrary pair of function spaces of unique solvability. Take φ ∈ L. By Lemma 6.2 we have that lim_{t→∞} ∫_0^t P_+e^{isA}Bφ(s)ds exists. By Lemmas 6.3 and 6.4 we get

lim_{t→∞} (I-Π)e^{itA^×}e^{-itA}[∫_0^t (I-P_+)e^{isA}Bφ(s)ds - ∫_t^∞ P_+e^{isA}Bφ(s)ds] = 0.

Hence φ ∈ L_θ, so L ⊂ L_θ. Similarly we prove L^× ⊂ L_θ^×. □

Let θ and Π be as in Theorem 6.1. Without going into details we mention a few concrete function spaces contained in L_θ and L_θ^×. Let α_0 be the length of the longest Jordan chain corresponding to a real eigenvalue of (I-Π)A^×(I-Π) (note that (I-Π)e^{itA^×} is bounded by a polynomial of degree α_0 - 1 for t ≥ 0). Then



(t+1)^{-α} L_p^m[0,∞) ⊂ L_θ for α > α_0 - 1, 1 ≤ p ≤ ∞,

(t+1)^{-β} L_p^m[0,∞) ⊂ L_θ^× for β > α_0 - 1 + 1/p, 1 ≤ p ≤ ∞.

In particular,

e^{-ht} L_p^m[0,∞) ⊂ L_θ for h > 0, 1 ≤ p ≤ ∞,

e^{-ht} L_p^m[0,∞) ⊂ L_θ^× for h > 0, 1 ≤ p ≤ ∞.


EXAMPLE 6.6. (see Examples 4.2 and 5.2) Consider the kernel k(t) = ½e^{-|t|}, t ∈ ℝ. Its symbol is W(λ) = λ²/(λ²+1). From the Examples 4.2 and 5.2 we know that there is a unique pseudo-resolvent kernel. In this case the maximal pair (L,L^×) of function spaces of unique solvability is given by

(a) φ ∈ L if and only if φ ∈ L_{1,loc}[0,∞) and

(i) lim_{t→∞} ∫_0^t e^{-s}φ(s)ds exists,

(ii) lim_{t→∞} [∫_0^t e^{-t+s}φ(s)ds - ∫_t^∞ e^{t-s}φ(s)ds] = 0,

(b) f ∈ L^× if and only if f ∈ L_{1,loc}[0,∞) and

(i) lim_{t→∞} ∫_0^t f(s)ds exists.

To see this, we use the minimal realization θ = (A,B,C;ℂ²,ℂ) given by (4.3). In that case A^× is given by (4.4). We have a unique matching pair (L,L^×) of pseudo-ℝ-spectral subspaces. The projection Π along L onto L^× is given by (5.7). We compute

P_+e^{isA} = [1  0; 0  0][e^{-s}  0; 0  e^{s}] = [e^{-s}  0; 0  0],

(I-Π)e^{isA^×} = [1  1; 0  0][1-½s  -½s; ½s  1+½s] = [1  1; 0  0].

For each φ ∈ L_{1,loc}[0,∞) we have

∫_0^t P_+e^{isA}Bφ(s)ds = [∫_0^t e^{-s}φ(s)ds; 0].

If lim_{t→∞} ∫_0^t P_+e^{isA}Bφ(s)ds exists, we have

(I-Π)e^{itA^×}e^{-itA}[∫_0^t (I-P_+)e^{isA}Bφ(s)ds - ∫_t^∞ P_+e^{isA}Bφ(s)ds] =

= [∫_0^t e^{-t+s}φ(s)ds - ∫_t^∞ e^{t-s}φ(s)ds; 0].



Hence φ ∈ L_θ if and only if φ ∈ L_{1,loc}[0,∞) and

(i) lim_{t→∞} ∫_0^t e^{-s}φ(s)ds exists,

(ii) lim_{t→∞} [∫_0^t e^{-t+s}φ(s)ds - ∫_t^∞ e^{t-s}φ(s)ds] = 0.

Note that L_p[0,∞) ⊂ L_θ whenever 1 ≤ p ≤ ∞.

To describe L_θ^×, we compute for f ∈ L_{1,loc}[0,∞)

∫_0^t (I-Π)e^{isA^×}Bf(s)ds = [2∫_0^t f(s)ds; 0].

Suppose that lim_{t→∞} ∫_0^t (I-Π)e^{isA^×}Bf(s)ds exists. We have

e^{-itA^×}[∫_0^t Πe^{isA^×}Bf(s)ds - ∫_t^∞ (I-Π)e^{isA^×}Bf(s)ds] =

= [-∫_0^t (1+s)f(s)ds - (2+t)∫_t^∞ f(s)ds; ∫_0^t (1+s)f(s)ds + t∫_t^∞ f(s)ds].

Hence f ∈ L_θ^× if and only if f ∈ L_{1,loc}[0,∞) and

(i) lim_{t→∞} ∫_0^t f(s)ds exists,

(ii) lim_{t→∞} e^{-t}[∫_0^t (1+s)f(s)ds + (2+t)∫_t^∞ f(s)ds] = 0.


It turns out that the second condition is superfluous. For if lim_{t→∞} ∫_0^t f(s)ds exists, then lim_{t→∞} e^{-t}(2+t)∫_t^∞ f(s)ds = 0 and lim_{t→∞} e^{-t}∫_0^t f(s)ds = 0. Furthermore we have

∫_0^t s f(s)ds = t∫_0^t f(s)ds - ∫_0^t ∫_0^s f(r)dr ds.

The function F(t) = ∫_0^t f(s)ds, t ≥ 0, is absolutely continuous and bounded. Hence lim_{t→∞} e^{-t}∫_0^t s f(s)ds = 0, and f ∈ L_θ^× if and only if f ∈ L_{1,loc}[0,∞) and

(i) lim_{t→∞} ∫_0^t f(s)ds exists.

Note that L_1[0,∞) ⊂ L_θ^×, but L_p[0,∞) ⊄ L_θ^× whenever 1 < p ≤ ∞. We do have e^{-ht}L_p[0,∞) ⊂ L_θ^× whenever h > 0, 1 ≤ p ≤ ∞. We conclude that the only solution in L_p[0,∞) of (I-K)φ = 0 is φ = 0. The equation (I-K)φ = f is not always solvable in L (and hence in L_p[0,∞)) for f ∈ L_p[0,∞), 1 < p ≤ ∞.
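The integration-by-parts identity used above is easy to confirm numerically. The following sketch (with the sample choice f(s) = e^{-s}, an assumption for illustration only) checks the identity with trapezoidal quadrature and also illustrates that e^{-t}∫_0^t s f(s)ds is negligible for large t.

```python
import numpy as np

# Check numerically that
#   ∫_0^t s f(s) ds = t ∫_0^t f(s) ds - ∫_0^t (∫_0^s f(r) dr) ds,
# here for the sample L1-function f(s) = e^{-s} on [0, 10].
t_grid = np.linspace(0.0, 10.0, 20001)
f = np.exp(-t_grid)
dt = np.diff(t_grid)

# Cumulative integrals via the trapezoidal rule.
F  = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * dt)))     # ∫_0^t f
sf = t_grid * f
SF = np.concatenate(([0.0], np.cumsum((sf[1:] + sf[:-1]) / 2 * dt)))   # ∫_0^t s f
FF = np.concatenate(([0.0], np.cumsum((F[1:] + F[:-1]) / 2 * dt)))     # ∫_0^t F

lhs = SF[-1]
rhs = t_grid[-1] * F[-1] - FF[-1]
print(abs(lhs - rhs))          # ~0: the identity holds
print(np.exp(-10.0) * SF[-1])  # e^{-t} ∫_0^t s f ds is already tiny at t = 10
```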

ACKNOWLEDGEMENT

Sincere thanks are due to M.A. Kaashoek and I. Gohberg for useful suggestions, many discussions on the subject of this paper, and advice



concerning preparation of the final version of this paper.

REFERENCES

1. Bart, H., Gohberg, I., Kaashoek, M.A.: Minimal factorization of matrix and operator functions. Operator Theory: Advances and Applications, Vol. 1, Birkhäuser Verlag, Basel, 1979.

2. Bart, H., Gohberg, I., Kaashoek, M.A.: 'Wiener-Hopf integral equations, Toeplitz matrices and linear systems.' In: Toeplitz Centennial (Ed. I. Gohberg), Operator Theory: Advances and Applications, Vol. 4, Birkhäuser Verlag, Basel, 1982; 85-135.

3. Bart, H., Gohberg, I., Kaashoek, M.A.: 'Wiener-Hopf factorization of analytic operator functions and realization', Wiskundig Seminarium der Vrije Universiteit, Rapport nr. 231, Amsterdam, 1983.

4. Gohberg, I., Lancaster, P., Rodman, L.: Matrix polynomials, Academic Press, New York, N.Y., 1982.

5. Gohberg, I.C., Krein, M.G.: 'Systems of integral equations on a half line with kernels depending on the difference of arguments', Uspehi Mat. Nauk 13 (1958), no. 2 (80), 3-72 (Russian) = Amer. Math. Soc. Transl. (2) 14 (1960), 217-287.

6. Gohberg, I., Krupnik, N.: Einführung in die Theorie der eindimensionalen singulären Integraloperatoren, Birkhäuser Verlag, Basel, 1979.

7. Michlin, S.G., Prössdorf, S.: Singuläre Integraloperatoren, Akademie-Verlag, Berlin, 1980.

8. Prössdorf, S.: Einige Klassen singulärer Gleichungen, Akademie-Verlag, Berlin, 1974.

9. Ran, A.C.M.: Minimal factorization of selfadjoint rational matrix functions. Integral Equations and Operator Theory 5 (1982), 850-869.

10. Ran, A.C.M.: Semidefinite invariant subspaces; stability and applications. Ph.D. Thesis, Vrije Universiteit, Amsterdam, 1984.

11. Rodman, L.: Maximal invariant neutral subspaces and an application to the algebraic Riccati equation. Manuscripta Math. 43 (1983), 1-12.

L. Roozemond, Subfaculteit Wiskunde en Informatica, Vrije Universiteit, De Boelelaan 1081, 1081 HV Amsterdam The Netherlands.


Operator Theory: Advances and Applications, Vol. 21 © 1986 Birkhäuser Verlag Basel

MINIMAL FACTORIZATION OF INTEGRAL OPERATORS AND CASCADE

DECOMPOSITIONS OF SYSTEMS

I. Gohberg and M.A. Kaashoek


A minimal factorization theory is developed for integral operators of the second kind with a semi-separable kernel. Explicit formulas for the factors are given. The results are natural generalizations of the minimal factorization theorems for rational matrix functions. LU- and UL-factorization appear as special cases. In the proofs connections with cascade decompositions of systems with well-posed boundary conditions play an essential role.

0. INTRODUCTION

This paper deals with integral operators T which act on the space L_2([a,b],ℂ^m) of all square integrable ℂ^m-valued functions on [a,b] and which are of the form

(0.1) (Tφ)(t) = φ(t) + ∫_a^b k(t,s)φ(s)ds, a ≤ t ≤ b,

with k a semi-separable m×m matrix kernel. The latter means that k admits a representation of the following type:

(0.2) k(t,s) = F_1(t)G_1(s), a ≤ s < t ≤ b,
      k(t,s) = -F_2(t)G_2(s), a ≤ t < s ≤ b.

Here for ν = 1,2 the functions F_ν(·) and G_ν(·) are matrix functions of sizes m×n_ν and n_ν×m, respectively, which are square integrable on a ≤ t ≤ b. The numbers n_1 and n_2 may vary with T and the representation (0.2) of its kernel.

The main topic of the paper concerns factorizations T = T_1T_2, where T_1 and T_2 are integral operators of the same type as T, and the factorization T = T_1T_2 has certain additional minimality properties. Roughly speaking, minimal factorization allows one to factor an integral operator into a product of simpler ones, and it excludes trivial factorizations such as T = T_1(T_1^{-1}T), for example. The concept of minimal factorization for integral operators introduced and studied in this paper is a natural generalization of the concept of minimal factorization for rational matrix functions.



To explain this in more detail consider a factorization

(0.3) W_0(λ) = W_1(λ)W_2(λ),

where W_0(λ), W_1(λ) and W_2(λ) are rational m×m matrix functions. For the factorization in (0.3) minimality means that there is no "pole-zero cancellation" between the factors (see, e.g., [2], §4.3). Let us assume that the functions W_0(·), W_1(·) and W_2(·) are analytic at ∞ with the value at ∞ equal to the m×m identity matrix I_m. Then ([2], §2.1) we may represent these functions in the form:

W_ν(λ) = I_m + C_ν(λ - A_ν)^{-1}B_ν, ν = 0,1,2,

where A_ν is a square matrix of order ℓ_ν, say, and B_ν and C_ν are matrices of sizes ℓ_ν×m and m×ℓ_ν, respectively. The equality in (0.3) is now equivalent to the factorization T_0 = T_1T_2, where for ν = 0,1,2 the operator T_ν is the integral operator on L_2([a,b],ℂ^m) defined by

(T_νφ)(t) = φ(t) + ∫_a^t C_ν e^{(t-s)A_ν}B_ν φ(s)ds, a ≤ t ≤ b.

In this way theorems about minimal factorization of rational matrix functions

(see [2,3]) can be seen as theorems about minimal factorization of Volterra integral operators of the second kind with semi-separable kernels of a special

type. In this paper we extend this minimal factorization theory to the

class of all integral operators of the second kind with semi-separable kernels. The extension is done in two different ways, corresponding to two different

notions of minimal realization (see [8,9,10,16]), and leading to two different

types of minimal factorization, one of which will be called SB-minimal

factorization (for reasons which will be clear later) and the other just minimal factorization. For both concepts we describe all minimal factorizations

and we give explicit formulas for the factors. We also analyze the advantages

and disadvantages of the two types of minimal factorization.

Special attention is paid to LU- and UL-factorization. Such

factorizations appear here as examples of SB-minimal factorization. Our

theorems for LU- and UL-factorization are related to those of [11], §IV.8 and

[18], and they generalize the results in [13] (see also [1]) which concern the

positive definite case. We note that the class of kernels considered in the



present paper differs from the class of kernels treated in [11, 18]; for

example, in our case we allow the kernels to have discontinuities on the

diagonal.


The paper is based on the fact ([5], §I.4) that an integral operator

with a semi-separable kernel can be viewed as the input/output operator of a

time varying linear system with well-posed boundary conditions. For example,

the integral operator (0.1) with kernel (0.2) is the input/output operator of

the system

x_1'(t) = G_1(t)u(t), x_1(a) = 0, a ≤ t ≤ b,

x_2'(t) = G_2(t)u(t), x_2(b) = 0, a ≤ t ≤ b,

y(t) = u(t) + F_1(t)x_1(t) + F_2(t)x_2(t).

Using this connection we reduce the problem of minimal factorization of inte­

gral operators to a problem of cascade decomposition of time varying systems.

The system theory language makes the problem more transparent and also shows better the analogy with the rational matrix case.

The paper consists of three chapters. In the first chapter we state

(without proofs) the main minimal factorization theorems. This is done for

different classes of integral operators. In Chapter I we also give a number of

illustrative examples. The proofs of the theorems are given in the third

chapter; they are based on a decomposition theory for time varying systems with

well-posed boundary conditions which we develop in the second chapter.

A few words about notation. All linear spaces appearing below are vector spaces over ℂ. The identity operator on a linear space is always denoted by I. The symbol I_m is used for the m×m identity matrix. An m×n matrix A will be identified with the linear operator from ℂ^n into ℂ^m given by the canonical action of A with respect to the standard bases in ℂ^n and ℂ^m. The symbol χ_E stands for the characteristic function of a set E; thus χ_E(t) = 1 for t ∈ E and χ_E(t) = 0 for t ∉ E.

I. MAIN RESULTS

In this chapter we state (without proofs) the main results of this paper. In the first four sections the main theorems and examples for minimal factorization are given. The SB-minimal factorization theorems appear in the next three sections. The last section is devoted to LU- and UL-factorization.



1.1 Minimal representation and degree

Let Y be a finite dimensional inner product space. By L_2([a,b],Y) we denote the Hilbert space of all square integrable Y-valued functions on a ≤ t ≤ b. An operator T on L_2([a,b],Y) will be called a (SK)-integral operator if T is an integral operator of the second kind with a semi-separable kernel, i.e., T has the form

(1.1) (Tφ)(t) = φ(t) + ∫_a^b k(t,s)φ(s)ds, a ≤ t ≤ b,

and its kernel k admits the following representation:

(1.2) k(t,s) = F_1(t)G_1(s), a ≤ s < t ≤ b,
      k(t,s) = -F_2(t)G_2(s), a ≤ t < s ≤ b.

Here F_ν(t) : X_ν → Y and G_ν(t) : Y → X_ν are linear operators, the space X_ν is a finite dimensional inner product space, and as functions F_ν and G_ν are square integrable on a ≤ t ≤ b (ν = 1,2). The spaces X_1 and X_2 may vary with T and the representation (1.2) of its kernel. In what follows we keep the space Y fixed.

The kernel k of a (SK)-integral operator T may also be represented in the form

(1.3) k(t,s) = C(t)(I - P)B(s), a ≤ s < t ≤ b,
      k(t,s) = -C(t)PB(s), a ≤ t < s ≤ b,

where B(t) : Y → X, C(t) : X → Y and P : X → X are linear operators, the space X is a finite dimensional inner product space and the functions B(·) and C(·) are square integrable on a ≤ t ≤ b. Indeed, starting from (1.2) we may write k as in (1.3) by taking X = X_1 ⊕ X_2 and

(1.4) B(t) = [G_1(t); G_2(t)], C(t) = [F_1(t)  F_2(t)], P = [0  0; 0  I].

If (1.3) holds for the kernel k of T, then we call the triple Δ = (B(·),C(·);P) a representation of T. The space X will be called the internal space of the representation Δ, and we shall refer to P as the internal operator. Note that in (1.4) the operator P is a projection, but this is not a condition and we allow the internal operator to be an arbitrary operator.

Let T be a (SK)-integral operator. A representation Δ of T will be called a minimal representation of T if among all representations of T the



dimension of the internal space of Δ is as small as possible. We define the degree of T (notation: δ(T)) to be the dimension of the internal space of a minimal representation of T. The degree satisfies a sublogarithmic property.

1.1 LEMMA. Let T_1 and T_2 be (SK)-integral operators on L_2([a,b],Y). Then the product T_1T_2 is a (SK)-integral operator and

δ(T_1T_2) ≤ δ(T_1) + δ(T_2).

1.2 Minimal factorization (1)

Let T be a (SK)-integral operator on L_2([a,b],Y). We call T = T_1T_2 a minimal factorization (of T) if T_1 and T_2 are (SK)-integral operators on L_2([a,b],Y) and the degree of T is the sum of the degrees of T_1 and T_2, i.e.,

δ(T) = δ(T_1) + δ(T_2).

To construct minimal factorizations of T we need the notion of a decomposing

projection.

Let Δ = (B(·),C(·);P) be a representation of T. Let X be the internal space of Δ. By definition the fundamental operator of Δ is the unique absolutely continuous solution Ω(t) : X → X, a ≤ t ≤ b, of the initial value problem:

(2.2) Ω'(t) = -B(t)C(t)Ω(t), a ≤ t ≤ b, Ω(a) = I.
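The initial value problem (2.2) is a linear matrix ODE and can be integrated numerically. The sketch below (sample constant data B = [1, -1]^T, C = (1  1), an assumption for illustration) uses classical RK4; for this data BC is nilpotent, (BC)² = 0, so the exact fundamental operator is Ω(t) = I - tBC, against which the numerical result is compared.

```python
import numpy as np

# Integrate (2.2): dΩ/dt = -B(t)C(t)Ω(t), Ω(0) = I, for constant sample data.
B = np.array([[1.0], [-1.0]])
C = np.array([[1.0, 1.0]])
BC = B @ C                      # nilpotent here: (BC)^2 = 0

def omega(t, steps=1000):
    """Classical RK4 integration of dΩ/ds = -BC Ω from 0 to t."""
    h = t / steps
    Om = np.eye(2)
    rhs = lambda M: -BC @ M
    for _ in range(steps):
        k1 = rhs(Om); k2 = rhs(Om + h/2*k1); k3 = rhs(Om + h/2*k2); k4 = rhs(Om + h*k3)
        Om = Om + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return Om

exact = np.eye(2) - 1.0 * BC    # Ω(1) = I - BC, since (BC)^2 = 0
print(np.max(np.abs(omega(1.0) - exact)))  # ~0
```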

A projection Π of the internal space X of Δ will be called a decomposing projection for Δ if the following conditions are satisfied:

(i) ΠPΠ = ΠP,

(ii) det(I - Π + Ω(t)Π) ≠ 0, a ≤ t ≤ b,

(iii) (I-Π)PΠ = P(I-Π)(I-Π+Ω(b)Π)^{-1}Π(I-P).

The function Ω(·) in (ii) and (iii) is the fundamental operator of Δ. Condition (i) means that Ker Π is invariant under P. Let us write P and Ω(t) as 2×2 block matrices relative to the decomposition X = Ker Π ⊕ Im Π:

P = [P_11  P_12; P_21  P_22],  Ω(t) = [Ω_11(t)  Ω_12(t); Ω_21(t)  Ω_22(t)].

Then the conditions (i) - (iii) are equivalent to

(i)' P_21 = 0,

(ii)' det Ω_22(t) ≠ 0, a ≤ t ≤ b,



2.1 THEOREM. Let T be a (SK)-integral operator on L_2([a,b],Y), and let Δ = (B(·),C(·);P) be a minimal representation of T. If Π is a decomposing projection for Δ, then a minimal factorization T = T_1T_2 is obtained by taking T_1 and T_2 to be the (SK)-integral operators on L_2([a,b],Y) of which the kernels are given by

(2.3) k_1(t,s) = C(t)(I - P)(I - Π(s))B(s), a ≤ s < t ≤ b,
      k_1(t,s) = -C(t)P(I - Π(s))B(s), a ≤ t < s ≤ b,

(2.4) k_2(t,s) = C(t)Π(t)(I - P)B(s), a ≤ s < t ≤ b,
      k_2(t,s) = -C(t)Π(t)PB(s), a ≤ t < s ≤ b,

respectively. Here

(2.5) Π(t) = Ω(t)Π(I - Π + Ω(t)Π)^{-1}Π, a ≤ t ≤ b,

with Ω(t) being the fundamental operator of Δ.

Furthermore, if Δ runs over all possible minimal representations of T, then all minimal factorizations of T may be obtained in this way, that is, given a minimal factorization T = T_1T_2, there exists a minimal representation Δ = (B(·),C(·);P) of T and a decomposing projection Π for Δ such that the kernels of T_1 and T_2 are given by (2.3) and (2.4), respectively.

Let Δ = (B(·),C(·);P) be a minimal representation of the (SK)-integral operator T. We say that Δ generates the minimal factorization T = T_1T_2 if there exists a decomposing projection Π for Δ such that the kernels of T_1 and T_2 are given by (2.3) and (2.4), respectively. In general, there is no single minimal representation of T which generates all minimal factorizations of T. In other words, to get all minimal factorizations of T it is, in general, necessary to consider various minimal representations of T. This statement is substantiated by the next example.

On L_2[0,1] we consider the (SK)-integral operators:

(T_0φ)(t) = φ(t) + ∫_0^1 φ(s)ds,

(T_1φ)(t) = φ(t) + ∫_0^t (1/(1+s))φ(s)ds,



(T_2φ)(t) = φ(t) + ∫_t^1 (1/(1+t))φ(s)ds,

(T_3φ)(t) = φ(t) - ∫_t^1 (1/(1+s))φ(s)ds,

(T_4φ)(t) = φ(t) + ∫_0^t (2/(1+t))φ(s)ds + ∫_t^1 (3/(1+t))φ(s)ds.

It is straightforward to check (see also the analysis here below) that

(2.6) T_0 = T_1T_2 = T_3T_4.

Let us prove that the two factorizations in (2.6) are minimal.

First of all, note that

Δ = ([1; -1], (1  1); [0  0; 0  1]),

Δ* = ([1; -1], (1  1); [1  -1; 0  3]),

are representations of T_0. The internal space of both Δ and Δ* has dimension two and it is not difficult to check (see [9], §3) that T_0 has no representation with an internal space of a smaller dimension. Hence both Δ and Δ* are minimal representations of T_0. Next, consider on ℂ² the projection

(2.7) Π = [0  0; 0  1].

We claim that Π is a decomposing projection for Δ and Δ*. Obviously, condition (i)' is satisfied for Δ and Δ*. Furthermore,

(2.8) Ω(t) = [1-t  -t; t  1+t], 0 ≤ t ≤ 1,

is the fundamental operator for both Δ and Δ*. Now, it is simple to see that conditions (ii)' and (iii)' are fulfilled for both Δ and Δ*. Hence Π is decomposing for Δ and Δ*.

Note that for Π given by (2.7) and Ω(t) by (2.8) the function Π(t) in (2.5) is given by

(2.9) Π(t) = [0  -t(1+t)^{-1}; 0  1], 0 ≤ t ≤ 1.
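The formula (2.5) can be checked directly against (2.9). A small numerical sketch, using the data of this example (Ω(t) from (2.8) and Π from (2.7)):

```python
import numpy as np

# Evaluate Π(t) = Ω(t)Π(I - Π + Ω(t)Π)^{-1}Π, formula (2.5), for the example data.
def pi_t(t):
    Om = np.array([[1 - t, -t], [t, 1 + t]])   # Ω(t) from (2.8)
    P = np.array([[0.0, 0.0], [0.0, 1.0]])     # the projection Π of (2.7)
    I = np.eye(2)
    return Om @ P @ np.linalg.inv(I - P + Om @ P) @ P

t = 0.7
expected = np.array([[0.0, -t / (1 + t)], [0.0, 1.0]])   # formula (2.9)
print(np.max(np.abs(pi_t(t) - expected)))  # ~0
```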



Using this one sees that Theorem 2.1 applied to Δ and Π yields the first factorization in (2.6), and the second factorization in (2.6) is obtained by applying Theorem 2.1 to Δ* and Π. Thus both factorizations in (2.6) are minimal.

The fact that two different minimal representations of T_0 were used to construct the minimal factorizations in (2.6) is no coincidence. In §III.1 we shall prove that the operator T_0 in the present example has no single minimal representation which generates both factorizations in (2.6).

Let T be an arbitrary (SK)-integral operator. The question whether there exists a single minimal representation of T which generates all minimal factorizations of T is related to the question of uniqueness of a minimal representation. Two representations Δ_1 = (B_1(·),C_1(·);P_1) and Δ_2 = (B_2(·),C_2(·);P_2) of T are called similar if there exists an invertible operator S : X_1 → X_2, acting between the internal spaces of Δ_1 and Δ_2, such that

(2.10) B_2(t) = SB_1(t), C_2(t) = C_1(t)S^{-1}, P_2 = SP_1S^{-1}.

It may happen that T has non-similar minimal representations. In fact, the minimal representations Δ and Δ* of the operator T_0 in the preceding example are not similar. We say that T has up to similarity a unique minimal representation if any two minimal representations of T are similar. Similar minimal representations of T generate the same set of minimal factorizations of T. Thus, if T has up to similarity a unique minimal representation, then Theorem 2.1 implies that each minimal representation of T generates all minimal factorizations of T. It is not known whether the converse of the latter statement holds true.

1.3 Minimal factorization of Volterra integral operators (1)

This section concerns Volterra integral operators on L_2([a,b],Y):

(3.1) (Tφ)(t) = φ(t) + ∫_a^t k(t,s)φ(s)ds, a ≤ t ≤ b.

Throughout the section we assume that the kernel k admits a representation of the form:

(3.2) k(t,s) = F(t)G(s), a ≤ s ≤ t ≤ b,



where F(t) : X → Y and G(t) : Y → X are linear operators acting between finite dimensional inner product spaces and as functions F(·) and G(·) are analytic on [a,b]. The space X may vary with T and the representation of its kernel. Note that the kernel k of T is semi-separable, and hence T is a (SK)-integral operator. In §III.5 we shall see that the degree of T is given by

(3.3) δ(T) = rank(F ⊗ G).

Here F and G are the analytic functions in the representation (3.2) of the kernel k of T and F ⊗ G stands for the finite rank integral operator on L_2([a,b],Y) with kernel F(t)G(s), that is,

(3.4) (F ⊗ G)φ(t) = F(t) ∫_a^b G(s)φ(s)ds, a ≤ t ≤ b.

Operators T of the type described in the previous paragraph will be called (SK)-integral operators in the class (AC) (the letters AC stand for analytic causal). Let T be an operator in this class. Later (in §III.5) we shall prove that a minimal representation Δ of T always has the form (B(t),C(t);0) with B(·) and C(·) analytic on [a,b]. In particular, the internal operator P of Δ is the zero operator. Note that P = 0 implies that the conditions (i) and (iii) in the definition of a decomposing projection are automatically fulfilled. Hence a projection Π is a decomposing projection for a minimal representation Δ of T if and only if

det(I - Π + Ω(t)Π) ≠ 0, a ≤ t ≤ b,

where Ω(t) is the fundamental operator of Δ.

3.1 THEOREM. Let T be a (SK)-integral operator on L_2([a,b],Y) in the class (AC), and let Δ = (B(·),C(·);0) be a minimal representation of T.

(i) If Π is a decomposing projection for Δ, then a minimal factorization T = T_1T_2 is obtained by taking

(3.6) (T_1φ)(t) = φ(t) + ∫_a^t C(t)(I - Π(s))B(s)φ(s)ds, a ≤ t ≤ b,

(3.7) (T_2φ)(t) = φ(t) + ∫_a^t C(t)Π(t)B(s)φ(s)ds, a ≤ t ≤ b.

Here

(3.8) Π(t) = Ω(t)Π(I - Π + Ω(t)Π)^{-1}Π, a ≤ t ≤ b,



with Ω(t) being the fundamental operator of Δ.

(ii) If T = T_1T_2 is a minimal factorization, then the factors are in the class (AC) and there exists a unique decomposing projection Π for Δ such that T_1 and T_2 are given by (3.6) and (3.7), respectively.

Note that according to part (ii) of the previous theorem all minimal factorizations of a (SK)-integral operator T in the class (AC) may be generated from a single minimal representation of T. The latter statement does not remain true if one drops the analyticity condition on the functions F(·) and G(·) in (3.2). To see this consider on L_2[0,1] the integral operator

(3.9) (Tφ)(t) = φ(t) + ∫_0^t (1 + χ_{[0,½)}(t)χ_{[0,½)}(s))φ(s)ds, 0 ≤ t ≤ 1.

Here χ_E stands for the characteristic function of the set E. We shall construct two minimal factorizations

(3.10) T = T_1^{(ν)}T_2^{(ν)}, ν = 1,2,

and we shall prove (in §III.5) that there does not exist a single minimal representation of T which generates the two minimal factorizations in (3.10).

Introduce the following functions on 0 ≤ t ≤ 1:

g_1 = χ_{[0,½)},    g_2 = χ_{[0,1]},

r_1(t) = (e^{2t}-1)/(e^{2t}+1), 0 ≤ t ≤ ½,
r_1(t) = c e^{-t+½}, ½ ≤ t ≤ 1,

r_2(t) = r_1(t), 0 ≤ t ≤ ½,
r_2(t) = d e^{-t+½}/(1 + d - d e^{-t+½}), ½ ≤ t ≤ 1.

Here c and d are chosen in such a way that r_1(·) and r_2(·) are continuous on [0,1]. For ν = 1,2 let T_1^{(ν)} and T_2^{(ν)} be the integral operators on L_2[0,1] defined by



(T_1^{(ν)}φ)(t) = φ(t) + ∫_0^t (1 + r_ν(s)g_ν(s))φ(s)ds,

(T_2^{(ν)}φ)(t) = φ(t) + ∫_0^t (χ_{[0,½)}(t) - r_ν(t))g_ν(s)φ(s)ds.

To prove that (3.10) holds, consider the following Riccati equation:

(3.11) r'(t) = χ_{[0,½)}(t) + r(t)g(t)χ_{[0,½)}(t) - r(t) - r(t)²g(t), 0 ≤ t ≤ 1, r(0) = 0.

One checks that for g = g_1 the function r_1(·) is a solution of (3.11) and for g = g_2 the function r_2(·) is a solution of (3.11). Using this a direct computation shows that (3.10) holds.

Next, let us prove that the two factorizations in (3.10) are minimal. For ν = 1,2 put

(3.12) Δ_ν = ([1; g_ν(·)], (1  χ_{[0,½)}(·)); [0  0; 0  0]).

Since g_1(t) = g_2(t) = χ_{[0,½)}(t) for 0 ≤ t < ½, one sees that Δ_1 and Δ_2 are representations of T. It is readily checked that T does not have a representation with a one dimensional internal space. Thus δ(T) = 2. From the definitions of T_1^{(ν)} and T_2^{(ν)} it is clear that δ(T_1^{(ν)}) = δ(T_2^{(ν)}) = 1. So

δ(T_1^{(ν)}) + δ(T_2^{(ν)}) = δ(T), ν = 1,2,

and hence the factorizations in (3.10) are minimal. In §III.5 we shall prove that the operator T in (3.9) has no single minimal representation with decomposing projections yielding the minimal factorizations in (3.10).

composing projections yielding the minimal factorization in (3.10).

It is interesting to note the differences between the above example

and the example given at the end of the previous section. In the example in

the previous section the kernels are analytic functions, but the internal

operators are allowed to be complicated; in the example given above the

kernels are not analytic, but now the internal operators are all required to

be equal to the zero operator. In both cases there is not a single minimal

representation which generates all minimal factorizations.

Let T be a (SK)-integral operator in the class (AC). We shall see in §III.5 that up to similarity T has a unique minimal representation. From this fact it follows that each minimal representation Δ of T generates all minimal factorizations of T (cf. the last paragraph of §1.2). Theorem 3.1 (ii)



adds to this that there is a one-one correspondence between all minimal factorizations of T and all decomposing projections of a given minimal representation of T.

1.4 Stationary causal operators and transfer functions

This section concerns Volterra integral operators on L_2([a,b],Y) which can be represented in the form

(4.1) (Tφ)(t) = φ(t) + ∫_a^t Ce^{(t-s)A}Bφ(s)ds, a ≤ t ≤ b.

Here A : X → X, B : Y → X and C : X → Y are linear operators and X is a finite dimensional inner product space. The space X may vary with T and the representation (4.1). Note that the kernel of the integral operator T in (4.1) is semi-separable, and hence T is a (SK)-integral operator. The degree of T is given by

(4.2) δ(T) = rank [CA^{i+j-2}B]_{i,j=1}^{n},

where n is equal to the dimension of the space X.
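In state-space terms this rank is easy to compute: by standard realization theory, the rank of the block Hankel matrix built from the Markov parameters CA^kB equals the McMillan degree of the symbol. A sketch on sample data (A, B, C assumed here purely for illustration):

```python
import numpy as np

# Rank of the block Hankel matrix [C A^{i+j-2} B]_{i,j=1..n} for sample data.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
n = A.shape[0]

markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * n - 1)]
H = np.block([[markov[i + j] for j in range(n)] for i in range(n)])
print(np.linalg.matrix_rank(H))  # 2: this sample realization is minimal
```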

An operator T of the type described in the previous paragraph will be called a (SK)-integral operator in the class (STC) (the letters STC stand for stationary causal). Obviously, the class (STC) is contained in the class (AC). We shall prove (in §III.6) that each minimal representation Δ of a (SK)-integral operator in the class (STC) is of the form

(4.3) Δ = (e^{-(t-a)A}B, Ce^{(t-a)A}; 0).

In (4.3) one may replace a by any other number; in particular, a may be replaced by 0. A projection Π is decomposing for the representation (4.3) if and only if

(4.4) det(I - Π + e^{-(t-a)A}e^{(t-a)(A-BC)}Π) ≠ 0, a ≤ t ≤ b.

To describe all minimal factorizations with factors in the class (STC) we need the notion of a supporting projection (cf. [2], §1.1). Let Δ = (e^{-(t-a)A}B, Ce^{(t-a)A}; 0) be a representation of T. A projection Π of the internal space X of Δ is called a supporting projection for Δ if

(4.5) A Ker Π ⊂ Ker Π, (A - BC) Im Π ⊂ Im Π.



The definition does not depend on a. If (4.5) holds, then (4.4) is satisfied, and hence a supporting projection for Δ in (4.3) is a decomposing projection. The converse of the latter statement does not hold (see the example after Theorem 4.1).

4.1 THEOREM. Let T be a (SK)-integral operator on L_2([a,b],Y) in the class (STC), and let Δ = (e^{-tA}B, Ce^{tA}; 0) be a minimal representation of T.

(i) If Π is a supporting projection for Δ, then a minimal factorization T = T_1T_2 is obtained by taking

(4.6) (T_1φ)(t) = φ(t) + ∫_a^t Ce^{(t-s)A}(I - Π)Bφ(s)ds, a ≤ t ≤ b,

(4.7) (T_2φ)(t) = φ(t) + ∫_a^t CΠe^{(t-s)A}Bφ(s)ds, a ≤ t ≤ b.

The factors T_1 and T_2 are (SK)-integral operators in the class (STC).

(ii) If T = T_1T_2 is a minimal factorization of T with factors T_1 and T_2 in the class (STC), then there exists a unique supporting projection Π for Δ such that T_1 and T_2 are given by (4.6) and (4.7), respectively.

Note that the definition of a supporting projection does not involve the interval [a,b]. It follows that under the conditions of Theorem 4.1 (i) the factorization T = T_1T_2 in Theorem 4.1 (i) also holds when [a,b] is replaced by any other interval.

An integral operator T in the class (STC) may have minimal factorizations T = T_1T_2 for which the factors are not in the class (STC). To see this, define T_0 on L_2[0,1] by

(T_0φ)(t) = φ(t) + ∫_0^t (2 + t - s)e^{t-s}φ(s)ds, 0 ≤ t ≤ 1.

The triple

Δ_0 = ([e^{-t}; (1-t)e^{-t}], ((1+t)e^{t}  e^{t}); [0  0; 0  0])

is a minimal representation of T_0. Note that Δ_0 is of the form (4.3) with

A = [1  0; 1  1], B = [1; 1], C = (1  1), a = 0.

Consider on ℂ² (the internal space of Δ_0) the projection

(4.8) Π = [0  0; 0  1].



Using (4.4) with a = 0 one computes that Π is a decomposing projection for Δ_0. Indeed

Ω(t) := e^{-tA}e^{t(A-BC)} = e^{-t}[1  -t; -t  1+t²],

and hence

det(I - Π + Ω(t)Π) = det [1  -te^{-t}; 0  (1+t²)e^{-t}] = (1+t²)e^{-t} ≠ 0

for 0 ≤ t ≤ 1. So Π is a decomposing projection for Δ_0, and we may apply Theorem 3.1 (i) to Δ_0 and Π. Put

(T_1φ)(t) = φ(t) + ∫_0^t (1+t)((1+s)/(1+s²))e^{t-s}φ(s)ds, 0 ≤ t ≤ 1,

(T_2φ)(t) = φ(t) + ∫_0^t ((1-t)/(1+t²))(1-s)e^{t-s}φ(s)ds, 0 ≤ t ≤ 1.

It follows that T_0 = T_1T_2 is a minimal factorization of T_0. But the factors T_1 and T_2 are not in the class (STC). Note that the projection Π in (4.8) is a decomposing projection for Δ_0 but not a supporting projection.

Another example of the type mentioned above is provided by the following integral operator:

(Tφ)(t) = φ(t) + ∫_0^t [1  0; t-s  1]φ(s)ds, 0 ≤ t ≤ 1.

The operator T acts on L_2([0,1],ℂ²), and obviously T is a (SK)-integral operator in the class (STC). One can show that this operator has no non-trivial minimal factorization with factors in the class (STC). On the other hand, if

(T_1φ)(t) = φ(t) + ∫_0^t [1  0; t  0]φ(s)ds,

(T_2φ)(t) = φ(t) + ∫_0^t [0  0; -s  1]φ(s)ds,

on 0 ≤ t ≤ 1, then T = T_1T_2 and this factorization is a non-trivial minimal factorization of T.
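The composition rule behind such factorizations of second-kind Volterra operators is k(t,s) = k_1(t,s) + k_2(t,s) + ∫_s^t k_1(t,r)k_2(r,s)dr. The sketch below checks it for the sample pair k_1(t,s) = [1 0; t 0], k_2(t,s) = [0 0; -s 1] (an assumption for illustration); here k_1(t,r)k_2(r,s) vanishes identically, and the composed kernel is [1 0; t-s 1] = e^{(t-s)A} with A = [0 0; 1 0].

```python
import numpy as np

# Kernel composition rule for products of second-kind Volterra operators.
def k1(t, s): return np.array([[1.0, 0.0], [t, 0.0]])
def k2(t, s): return np.array([[0.0, 0.0], [-s, 1.0]])

t, s = 0.9, 0.2
r_grid = np.linspace(s, t, 201)
# Riemann sum for ∫_s^t k1(t,r) k2(r,s) dr (each product is the zero matrix here).
conv = sum(k1(t, r) @ k2(r, s) for r in r_grid) * (t - s) / len(r_grid)
k = k1(t, s) + k2(t, s) + conv
print(np.max(np.abs(k - np.array([[1.0, 0.0], [t - s, 1.0]]))))  # ~0
```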

We conclude this section with a remark in which we compare Theorem

4.1 to the main minimal factorization theorem in [2,3]. In fact, we shall see



that Theorem 4.1 is just the integral operator version of Theorem 4.8 in [2]. In what follows Y = ℂ^m. Let (TF) be the class of all rational m×m matrix functions W such that W is analytic at ∞ and W(∞) is equal to the m×m identity matrix I_m. A function W in the class (TF) may be viewed as the transfer function (see [2], §1.1) of a causal time invariant linear system. This means that W may be written in the form

(4.9) W(λ) = I_m + C(λ - A)^{-1}B,

where A : X → X, B : ℂ^m → X and C : X → ℂ^m are linear operators and X is a finite dimensional inner product space. The right hand side of (4.9) is said to be a realization of W. The space X, which may vary with the realization and with W, is called the state space of the realization.

Take W in the class (TF), and assume that W is given by (4.9). Define TW to be the (SK)-integral operator on L2([a,b],ℂ^m) with kernel

        k(t,s) = {  C e^{(t−s)A} B,   a ≤ s ≤ t ≤ b,
                 {  0,                a ≤ t < s ≤ b.

Thus TW is given by the right hand side of (4.1). The definition of TW does not depend on the particular choice of the realization of W. To see this, note that

        C e^{(t−s)A} B = Σ_{n=0}^∞ (1/n!)(t−s)^n W_{n+1},

where Wn is the n-th coefficient in the Taylor expansion of W at ∞. The map

(4.10) W ↦ TW

is a bijection of the class (TF) onto the class (STC) and this map preserves the multiplication in (TF) and (STC), that is,

(4.11) TW1W2 = TW1 TW2

for W1 and W2 in (TF).
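The series identity above is easy to check numerically. The following sketch uses an arbitrary 2 × 2 realization (the matrices A, B, C are illustrative choices, not data from the text) and compares C e^{(t−s)A}B with the truncated series Σ ((t−s)^n/n!) W_{n+1}, where W_{n+1} = C A^n B:

```python
import math

import numpy as np

# Sketch of the identity C e^{(t-s)A} B = sum_{n>=0} ((t-s)^n / n!) W_{n+1},
# with W_{n+1} = C A^n B.  A, B, C below are arbitrary illustrative data.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])

def expm(M, terms=30):
    """Matrix exponential via its power series (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t, s = 0.8, 0.3
lhs = (C @ expm((t - s) * A) @ B)[0, 0]
rhs = sum((t - s) ** n / math.factorial(n)
          * (C @ np.linalg.matrix_power(A, n) @ B)[0, 0] for n in range(30))
```

Since both sides are entire in t − s, the agreement is limited only by the series truncation.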

For W in the class (TF) the McMillan degree δ(W) of W is the smallest possible state space dimension of a realization of W. A realization of W is called a minimal realization of W if its state space dimension is equal to δ(W). If W = W1W2 with W1 and W2 in (TF), then

(4.12) δ(W) ≤ δ(W1) + δ(W2),


and W = W1W2 is called a minimal factorization if equality holds in (4.12). Let W be given by (4.9). Then δ(W) is also equal to the right hand side of (4.2) (cf. [2], §4.2). This implies that

(4.13) δ(TW) = δ(W),

and hence I_m + C(λ − A)^{-1}B is a minimal realization of W if and only if Δ = (e^{−tA}B, Ce^{tA}; 0) is a minimal representation of TW. Furthermore, the right hand side of (4.11) is a minimal factorization in the class (STC) if and only if W1W2 is a minimal factorization in the class (TF). Hence for functions in the class (TF) Theorem 4.1 translates into the following theorem.

4.2 THEOREM. Let W(λ) = I_m + C(λ − A)^{-1}B be a minimal realization of the rational m × m matrix function W, and let X be the state space of the realization.

(i) Let Π be a projection of X such that

(4.14) A Ker Π ⊂ Ker Π,   (A − BC) Im Π ⊂ Im Π.

Then a minimal factorization W = W1W2 of W is obtained by taking

(4.15) W1(λ) = I_m + C(λ − A)^{-1}(I − Π)B,

(4.16) W2(λ) = I_m + CΠ(λ − A)^{-1}B.

(ii) Conversely, if W = W1W2 is a minimal factorization of W, then there exists a unique projection Π of X for which (4.14) holds such that the factors W1 and W2 are given by (4.15) and (4.16), respectively.

Theorem 4.2 is the minimal factorization theorem for rational matrix

functions as it appears in [2], §4.2 and [3], §2.3.
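A small numerical instance may make Theorem 4.2 concrete. In the sketch below (all matrices are illustrative choices, not data from the text) Π = diag(0,1), Ker Π = span{e1} is A-invariant and Im Π = span{e2} is (A − BC)-invariant, so (4.14) holds, and the factorization W = W1W2 of (4.15)-(4.16) can be checked pointwise:

```python
import numpy as np

# Illustrative data for Theorem 4.2 (m = 1); not an example from the text.
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])
Pi = np.diag([0.0, 1.0])
I2 = np.eye(2)

# the two invariance conditions (4.14)
assert np.allclose(Pi @ A @ (I2 - Pi), 0)             # A Ker Pi subset Ker Pi
assert np.allclose((I2 - Pi) @ (A - B @ C) @ Pi, 0)   # (A-BC) Im Pi subset Im Pi

def W(lam):   # W(lam) = 1 + C (lam - A)^{-1} B
    return 1.0 + (C @ np.linalg.solve(lam * I2 - A, B))[0, 0]

def W1(lam):  # (4.15)
    return 1.0 + (C @ np.linalg.solve(lam * I2 - A, (I2 - Pi) @ B))[0, 0]

def W2(lam):  # (4.16)
    return 1.0 + (C @ Pi @ np.linalg.solve(lam * I2 - A, B))[0, 0]
```

Evaluating at any point away from the spectrum of A gives W(λ) = W1(λ)W2(λ) to machine precision.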

1.5 SB-minimal factorization (1)

Let T be a (SK)-integral operator on L2([a,b],Y). From §I.1 we know that T has representations (B(·),C(·);P) for which P is a projection. Such a representation we shall call a SB-representation. (The letters SB refer to separable boundary conditions; see [8], §7.) Thus a representation Δ of T is a SB-representation if and only if the internal operator of Δ is a projection.

A representation Δ of T will be called a SB-minimal representation of T if Δ is a SB-representation of T and among all SB-representations of T the


dimension of the internal space of Δ is as small as possible. We define the SB-degree of T (notation: ε(T)) to be the dimension of the internal space of a SB-minimal representation of T. The SB-degree of T is also given by

(5.1) ε(T) = min { rank(F1 ⊗ G1) + rank(F2 ⊗ G2) }.

Here F ⊗ G stands for the finite rank integral operator with kernel F(t)G(s) (see formula (3.4)) and in (5.1) the minimum is taken over all possible representations of the kernel k of T in the form (1.2).

Obviously, ε(T) ≥ δ(T), but it may happen that ε(T) > δ(T). In fact, if T is the (SK)-integral operator on L2[0,1] given by

        (Tφ)(t) = φ(t) + 2 ∫_0^t φ(s)ds + ∫_t^1 φ(s)ds,   0 ≤ t ≤ 1,

then ε(T) = 2 and δ(T) = 1 (cf. the example at the end of §7 in [8]). If T

belongs to the class (AC), then (see §I.3) the internal operator of a minimal representation of T is equal to the zero operator (which is a projection), and thus in this case a minimal representation is automatically SB-minimal. It follows that for a (SK)-integral operator in the class (AC) the SB-degree ε(T) is equal to the degree δ(T).

Like the degree, the SB-degree has a sublogarithmic behaviour: ε(T1T2) ≤ ε(T1) + ε(T2). Given a (SK)-integral operator T on L2([a,b],Y) we call T = T1T2 a SB-minimal factorization of T if T1 and T2 are (SK)-integral operators on L2([a,b],Y) and the SB-degree of T is the sum of the SB-degrees of T1 and T2, i.e., ε(T) = ε(T1) + ε(T2).

In the same way as for minimal factorizations one may obtain SB-minimal factorizations of T from decomposing projections of SB-minimal representations of T. In fact, Theorem 2.1 remains true if everywhere in the theorem the word minimal is replaced by SB-minimal (see §III.3).

In general, the SB-minimal factorizations of T cannot be generated from a single SB-minimal representation, but one has to employ various SB-minimal representations of T. To see this one can use the (SK)-integral operator T on L2[0,1] defined by (3.9) and the two factorizations in (3.10). First of all, since the representation Δν in (3.12) is a SB-representation,


we have ε(T) = δ(T) = 2. From the definitions of T1^(ν) and T2^(ν) one sees that ε(T1^(ν)) = ε(T2^(ν)) = 1. So

        ε(T) = ε(T1^(ν)) + ε(T2^(ν)),   ν = 1,2,

and it follows that the two factorizations in (3.10) are also SB-minimal factorizations. From ε(T) = δ(T) we know that a SB-minimal representation of T is automatically a minimal representation, and hence there does not exist a single SB-minimal representation which generates the two SB-minimal factorizations in (3.10).

1.6 SB-minimal factorization in the class (USB)

Let T be a (SK)-integral operator on L2([a,b],Y). We say that T belongs to the class (USB) if up to similarity T has a unique SB-minimal representation. In the terminology of [8] this is equivalent to the requirement that the kernel k of T is both lower unique and upper unique. If the kernel k of T admits a representation (1.2) in which the functions F1(·), G1(·), F2(·) and G2(·) are analytic on [a,b], then T is in the class (USB) (cf. [8], §5). The (SK)-integral operator T defined by (3.9) is not in the class (USB); in fact, the representations Δ1 and Δ2 of T defined by (3.12) are SB-minimal and not similar.

6.1 THEOREM. Let T be a (SK)-integral operator on L2([a,b],Y) in the class (USB), and let Δ = (B(·),C(·);P) be a SB-minimal representation of T.

(i) If Π is a decomposing projection for Δ, then a SB-minimal factorization T = T1T2 is obtained by taking T1 and T2 to be the (SK)-integral operators on L2([a,b],Y) of which the kernels are given by

(6.1) k1(t,s) = {  C(t)(I − P)(I − Π(s))B(s),   a ≤ s < t ≤ b,
               { −C(t)P(I − Π(s))B(s),         a ≤ t < s ≤ b,

(6.2) k2(t,s) = {  C(t)Π(t)(I − P)B(s),   a ≤ s < t ≤ b,
               { −C(t)Π(t)PB(s),         a ≤ t < s ≤ b,

respectively. Here

(6.3) Π(t) = Ω(t)Π(I − Π + Ω(t)Π)^{-1}Π,   a ≤ t ≤ b,

with Ω(t) being the fundamental operator of Δ.

(ii) If T = T1T2 is a SB-minimal factorization, then the factors T1


and T2 are in the class (USB) and there exists a unique decomposing projection Π for Δ such that the kernels of T1 and T2 are given by (6.1) and (6.2), respectively.

For minimal factorization the analogue of the uniqueness statement in Theorem 6.1 (ii) is not known. To make the problem more precise, let T be a (SK)-integral operator and assume that up to similarity T has a unique minimal representation, Δ say. Let T = T1T2 be a minimal factorization. Then we know (see the last paragraph of Section 1.2) that there exists a decomposing projection Π for Δ which yields the factorization T = T1T2. The question whether Π is unique is open.

1.7 Analytic semi-separable kernels

A (SK)-integral operator T on L2([a,b],Y) is said to be in the class (A) (the letter A stands for analytic) if the kernel k of T admits a representation (1.2) in which the functions F1(·), G1(·), F2(·) and G2(·) are analytic on [a,b]. In that case the kernel k of T is lower unique and upper unique (see [8], §5), and hence the class (A) is contained in the class (USB). The SB-degree ε(T) of an operator T in the class (A) is equal to

        ε(T) = rank(F1 ⊗ G1) + rank(F2 ⊗ G2),

where F1(·), G1(·), F2(·) and G2(·) are the analytic functions in the representation (1.2) of its kernel (and F ⊗ G is defined by (3.4)).

Let T be a (SK)-integral operator in the class (A), and let Δ = (B(·),C(·);P) be a SB-minimal representation of T. It is easy to prove that in this case the functions B(·) and C(·) are analytic on [a,b]. Thus B(t)C(t) depends analytically on t ∈ [a,b], and hence (see [4], §VI.1) the solution of the equation (2.2) is analytic on [a,b]. In other words the fundamental operator Ω(t) of a SB-minimal representation of T is analytic on [a,b]. From these remarks it is clear that Theorem 6.1 remains valid if in the theorem the class (USB) is replaced by the class (A).

1.8 LU- and UL-factorizations (1)

Consider on L2([a,b],Y) the integral operator

(8.1) (Tφ)(t) = φ(t) + ∫_a^b k(t,s)φ(s)ds,   a ≤ t ≤ b, a.e.,

and assume that the kernel k is square integrable on [a,b] × [a,b]. The integral operator T is said to admit a LU-factorization (see [11], §IV.7) if T can be factored as T = T−T+, where

        (T−φ)(t) = φ(t) + ∫_a^t k−(t,s)φ(s)ds,   a ≤ t ≤ b, a.e.,

        (T+φ)(t) = φ(t) + ∫_t^b k+(t,s)φ(s)ds,   a ≤ t ≤ b, a.e.,

and the kernels k− and k+ are square integrable on a ≤ s < t ≤ b and a ≤ t < s ≤ b, respectively. A factorization of the form T = T+T− is called a UL-factorization.

8.1 THEOREM. Let T be a (SK)-integral operator on L2([a,b],Y) with kernel k, and let Δ = (B(·),C(·);P) be a SB-representation of T. The fundamental operator of Δ is denoted by Ω(t). The following statements are equivalent:

(1) T admits a LU-factorization;

(2) the internal operator P of Δ is a decomposing projection for Δ;

(3) the following Riccati differential equation has a solution R(t) : Im P → Ker P on a ≤ t ≤ b:

(8.2) { R'(t) = (I − P + R(t)P)(−B(t)C(t))(R(t)P − P),   a ≤ t ≤ b,
      { R(a) = 0;

(4) det(I − P + Ω(t)P) ≠ 0 for a ≤ t ≤ b;

(5) the operator Tα : L2([a,α],Y) → L2([a,α],Y), defined by

(8.3) (Tαφ)(t) = φ(t) + ∫_a^α k(t,s)φ(s)ds,   a ≤ t ≤ α,

is invertible for each a < α ≤ b.

Furthermore, in that case T = (I + K−)(I + K+) with

(8.4) (K−φ)(t) = ∫_a^t C(t)(I − P + R(s)P)B(s)φ(s)ds,

(8.5) (K+φ)(t) = ∫_t^b C(t)(R(t)P − P)B(s)φ(s)ds,

where R(t) : Im P → Ker P, a ≤ t ≤ b, is the solution of the Riccati equation (8.2), which is equal to

(8.6) R(t) = (I − P){I − P + Ω(t)P}^{-1}P.
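Formula (8.6) can be spot-checked numerically. In the sketch below B(t) and C(t) are constant (the numbers are illustrative only), so Ω(t) = e^{tM} with M = −BC solves Ω' = −BCΩ, Ω(0) = I, and the operator function defined by (8.6) is compared with the right-hand side of the Riccati equation (8.2) via a centred difference:

```python
import numpy as np

# Illustrative data: P a projection, B(t), C(t) constant.
P = np.diag([0.0, 1.0])     # internal operator (a projection)
B = np.array([[1.0], [0.5]])
C = np.array([[0.7, -0.3]])
M = -B @ C                  # Omega(t) = exp(tM)
I2 = np.eye(2)

def expm(X, terms=40):
    """Matrix exponential via its power series (fine for small ||X||)."""
    out, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for n in range(1, terms):
        term = term @ X / n
        out = out + term
    return out

def R(t):
    # formula (8.6): R(t) = (I - P){I - P + Omega(t)P}^{-1} P
    return (I2 - P) @ np.linalg.inv(I2 - P + expm(t * M) @ P) @ P

def riccati_rhs(t):
    # right-hand side of (8.2): (I - P + R(t)P)(-B C)(R(t)P - P)
    return (I2 - P + R(t) @ P) @ M @ (R(t) @ P - P)
```

For this data det(I − P + Ω(t)P) never vanishes on [0,1], so condition (4) holds and R(t) exists on the whole interval.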


8.2 THEOREM. Let T be a (SK)-integral operator on L2([a,b],Y) with kernel k, and let Δ = (B(·),C(·);P) be a SB-representation of T. The fundamental operator of Δ is denoted by Ω(t). The following statements are equivalent:

(1) T admits a UL-factorization;

(2) the following Riccati differential equation has a solution R(t) : Ker P → Im P on a ≤ t ≤ b:

(8.7) { R'(t) = (P + R(t)(I − P))(−B(t)C(t))(R(t)(I − P) − (I − P)),   a ≤ t ≤ b,
      { R(b) = 0;

(3) det[(I − P)Ω(t) + PΩ(b)] ≠ 0 for a ≤ t ≤ b;

(4) the operator Tβ : L2([β,b],Y) → L2([β,b],Y), defined by

(8.8) (Tβφ)(t) = φ(t) + ∫_β^b k(t,s)φ(s)ds,   β ≤ t ≤ b,

is invertible for each a ≤ β < b.

Furthermore, in that case T = (I + H+)(I + H−) with

(8.9) (H−φ)(t) = ∫_a^t C(t)(I − P − R(t)(I − P))B(s)φ(s)ds,

(8.10) (H+φ)(t) = −∫_t^b C(t)(P + R(s)(I − P))B(s)φ(s)ds,

where R(t) : Ker P → Im P, a ≤ t ≤ b, is the solution of the Riccati equation (8.7), which is equal to

(8.11)

For the positive definite case the main statements of Theorems 8.1 and 8.2 were proved in [13]. The equivalence of the statements (1) and (5) in Theorem 8.1 and the equivalence of the statements (1) and (4) in Theorem 8.2 are well-known (see [11], §IV.7). For the equivalence of the statements (1), (4) and (5) in Theorem 8.1 and for the equivalence of the statements (1), (3) and (4) in Theorem 8.2 it is not necessary that the internal operator P of the representation Δ is a projection, and hence these equivalences hold for an arbitrary representation of T.

From the formulas (8.4) and (8.5) it follows that the factors in a LU-factorization of a (SK)-integral operator are again (SK)-integral operators. Furthermore, the equivalence of (1) and (2) in Theorem 8.1 implies that a


LU-factorization of a (SK)-integral operator (assuming it exists) may be obtained from any SB-minimal representation. It follows that a LU-factorization is a SB-minimal factorization. A similar result holds for UL-factorizations (see §III.8).

8.3 COROLLARY. A LU- or UL-factorization of a (SK)-integral operator is a SB-minimal factorization.

In general, a LU- (or UL-) factorization is not a minimal factorization. For example, consider the (SK)-integral operator T0 on L2[0,1] defined by

(8.12) (T0φ)(t) = φ(t) + 2 ∫_0^t φ(s)ds + ∫_t^1 φ(s)ds,   0 ≤ t ≤ 1.

The operator T0 admits a LU-factorization. In fact, T0 = T−T+ with

        (T−φ)(t) = φ(t) + 2 ∫_0^t (2 − e^{−s})^{-1} φ(s)ds,   0 ≤ t ≤ 1,

        (T+φ)(t) = φ(t) + (2e^t − 1)^{-1} ∫_t^1 φ(s)ds,   0 ≤ t ≤ 1.

Next, note that T0 is represented by (1,1;−1). Hence the degree δ(T0) = 1. But then it follows that a minimal representation of T0 has no non-trivial decomposing projection. Thus the LU-factorization T0 = T−T+ is not generated by a minimal representation of T0. Hence this factorization is not minimal.
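The factorization T0 = T−T+ above can be verified by discretization. The sketch below (midpoint rule; the grid size is an arbitrary choice) builds the matrices of T0, T− and T+ and compares T0 with the product T−T+:

```python
import numpy as np

# Midpoint-rule discretization of T0 = T_minus T_plus on [0,1].
# Kernels: k0(t,s) = 2 (s<t), 1 (s>t);
#          k_minus(t,s) = 2(2 - e^{-s})^{-1} (s<t);  k_plus(t,s) = (2e^t - 1)^{-1} (s>t).
N = 200
h = 1.0 / N
t = (np.arange(N) + 0.5) * h
S, T = np.meshgrid(t, t)                     # S[i,j] = s_j, T[i,j] = t_i

K0 = np.where(S < T, 2.0, 1.0)
np.fill_diagonal(K0, 0.0)                    # diagonal has measure zero
Km = np.where(S < T, 2.0 / (2.0 - np.exp(-S)), 0.0)
np.fill_diagonal(Km, 0.0)
Kp = np.where(S > T, 1.0 / (2.0 * np.exp(T) - 1.0), 0.0)

T0 = np.eye(N) + h * K0
Tm = np.eye(N) + h * Km
Tp = np.eye(N) + h * Kp

err = np.linalg.norm(T0 - Tm @ Tp) / np.linalg.norm(T0)
```

The relative error is of the order of the discretization error and shrinks as N grows.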

II. CASCADE DECOMPOSITION OF SYSTEMS

In this chapter we develop a cascade decomposition theory for time varying linear systems with well-posed boundary conditions. The first section has a preliminary character. In Sections 2 and 3 we define cascade decompositions and decomposing projections. The main results are stated in Section 4 and are proved in Sections 5 - 7. The last section is devoted to cascade decompositions of inverse systems.

II.1 Preliminaries about systems with boundary conditions

In this chapter we consider time varying linear systems with boundary conditions of the following form:

(1.1) { x'(t) = A(t)x(t) + B(t)u(t),   a ≤ t ≤ b,
      { y(t) = C(t)x(t) + u(t),        a ≤ t ≤ b,
      { N1x(a) + N2x(b) = 0.


Here A(t) : X → X, B(t) : Y → X and C(t) : X → Y are linear operators acting between finite dimensional inner product spaces. As a function of t the main coefficient A(t) is assumed to be integrable on a ≤ t ≤ b. The input coefficient B(t) and the output coefficient C(t) are square integrable on a ≤ t ≤ b. Throughout this chapter the input/output space Y is kept fixed; the state space X may differ per system. Note that the external coefficient (i.e., the coefficient of u(t) in the second equation of (1.1)) is assumed to be equal to the identity operator on the space Y. If the main coefficient, input coefficient and output coefficient do not depend on t, then (1.1) is called time invariant. The boundary conditions of (1.1) are given in terms of two linear operators N1 and N2 acting on X. The system is called causal if N1 = I (where I stands for the identity operator on X) and N2 = 0; if N1 = 0 and N2 = I, then (1.1) is said to be anticausal. In what follows the system (1.1) will be denoted by θ and for the sake of brevity we write

(1.2) θ = (A(t),B(t),C(t);N1,N2)_a^b.

The indices will be omitted when it is clear on which time interval the system

is considered.

Let θ be as in (1.2). The system

        θˣ = (A(t) − B(t)C(t), B(t), −C(t); N1, N2)_a^b

is called the inverse system associated with θ (see [5], §II.2). Note that one obtains θˣ by interchanging in (1.1) the roles of the input and output. The main coefficient of θˣ will be denoted by Aˣ(t); thus

(1.3) Aˣ(t) = A(t) − B(t)C(t),   a ≤ t ≤ b.

The system θ is called a boundary value system if the boundary conditions of θ are well-posed. The latter means that the homogeneous boundary value problem

        x'(t) = A(t)x(t),   a ≤ t ≤ b,   N1x(a) + N2x(b) = 0,

has the trivial solution only. Well-posedness of the boundary conditions is equivalent to the requirement that det(N1 + N2U(b)) ≠ 0. Here U(t) : X → X, a ≤ t ≤ b, is the fundamental operator of the system θ, i.e., the unique absolutely continuous solution of


        U'(t) = A(t)U(t),   a ≤ t ≤ b,   U(a) = I.

The causal and anticausal systems are the classical examples of boundary

value systems.

A boundary value system θ has a well-defined input/output map T_θ, namely (see [14,15]; [5], §I.2) the integral operator

        y(t) = (T_θ u)(t) = u(t) + ∫_a^b k_θ(t,s)u(s)ds,   a ≤ t ≤ b,

of which the kernel k_θ is given by

(1.4) k_θ(t,s) = {  C(t)U(t)(I − P_θ)U(s)^{-1}B(s),   a ≤ s < t ≤ b,
                { −C(t)U(t)P_θ U(s)^{-1}B(s),        a ≤ t < s ≤ b,

where P_θ = (N1 + N2U(b))^{-1}N2U(b). It follows that T_θ is a (SK)-integral operator on L2([a,b],Y). The operator P_θ is called the canonical boundary value operator of θ.

Let θ have well-posed boundary conditions. It does not follow that

the inverse system θˣ also has well-posed boundary conditions. In fact, the latter happens if and only if T_θ is invertible, and in that case (see [5], Theorem II.2.1)

(1.5) T_{θˣ} = (T_θ)^{-1}.

Two time varying systems θ1 and θ2,

(1.6) θν = (Aν(t), Bν(t), Cν(t); N1^(ν), N2^(ν))_a^b,   ν = 1,2,

with state spaces X1 and X2, respectively, are called similar (notation: θ1 ∼ θ2) if there exist an invertible operator E : X1 → X2 and an absolutely continuous function S(t) : X1 → X2, a ≤ t ≤ b, of which the values are invertible operators such that

(1.7) A2(t) = S(t)A1(t)S(t)^{-1} + S'(t)S(t)^{-1},

(1.8) B2(t) = S(t)B1(t),

(1.9) C2(t) = C1(t)S(t)^{-1},

(1.10) N1^(2) = EN1^(1)S(a)^{-1},   N2^(2) = EN2^(1)S(b)^{-1},


almost everywhere on a ≤ t ≤ b. This notion of similarity appears in a natural way when in (1.1) the state x(t) is replaced by z(t) = S(t)x(t) (see [5], §I.5). We shall refer to S(t), a ≤ t ≤ b, as a similarity transformation between θ1 and θ2. Formula (1.7) implies that the fundamental operators U1(t) and U2(t) of θ1 and θ2, respectively, are related in the following way:

        U2(t) = S(t)U1(t)S(a)^{-1},   a ≤ t ≤ b.

Well-posedness of the boundary conditions is preserved under a similarity transformation and similar systems with well-posed boundary conditions have similar canonical boundary value operators. In fact, if P1 and P2 are the canonical boundary value operators of θ1 and θ2, respectively, then the above formulas imply that P2 = S(a)P1S(a)^{-1}. It follows that similar systems with well-posed boundary conditions have the same input/output map.
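The kernel formula (1.4) is easy to check against a direct discretization of the boundary value problem (1.1). The scalar sketch below uses purely illustrative data (not an example from the text): the boundary value problem is solved explicitly via the variation-of-constants formula and the result is compared with u + ∫ k_θ u:

```python
import numpy as np

# Scalar system x' = a x + b u, y = c x + u on [0,1], n1 x(0) + n2 x(1) = 0.
# All numbers below are illustrative choices.
a, b, c = -0.4, 1.0, 0.7
n1, n2 = 1.0, 0.5
U = lambda t: np.exp(a * t)          # fundamental operator
E = n1 + n2 * U(1.0)                 # well-posedness: E != 0
P = n2 * U(1.0) / E                  # canonical boundary value operator

N = 400
h = 1.0 / N
t = (np.arange(N) + 0.5) * h
u = np.sin(3 * t) + 0.2              # a test input

# Direct solution: x(t) = U(t)x0 + int_0^t U(t)U(s)^{-1} b u(s) ds,
# with x0 fixed by the boundary condition.
x0 = -(n2 / E) * h * np.sum(U(1.0) / U(t) * b * u)
x = U(t) * x0 + np.array([h * np.sum(U(ti) / U(t[:i]) * b * u[:i])
                          for i, ti in enumerate(t)])
y_direct = c * x + u

# Input/output map via the kernel (1.4).
S, T = np.meshgrid(t, t)
k = np.where(S < T, c * U(T) * (1 - P) / U(S) * b, -c * U(T) * P / U(S) * b)
y_kernel = u + h * (k @ u)

err = np.max(np.abs(y_direct - y_kernel))
```

Both computations use the same Riemann sums, so they agree to rounding error, which is exactly what the derivation of (1.4) predicts.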

The system θ0 = (A0(t),B0(t),C0(t);N1^(0),N2^(0))_a^b is called a reduction of the system θ = (A(t),B(t),C(t);N1,N2)_a^b if (see [6,7]) the state space X of θ admits a decomposition, X = X1 ⊕ X0 ⊕ X2, such that relative to this decomposition the coefficients of θ and its boundary value operators are partitioned in the following way:

(1.11) A(t) = [ *  *      * ]          B(t) = [ *     ]
              [ 0  A0(t)  * ],                [ B0(t) ],
              [ 0  0      * ]                 [ 0     ]

(1.12) C(t) = ( 0  C0(t)  * );

(1.13) NνE = [ *  *       * ]
             [ 0  Nν^(0)  * ],   ν = 1,2.
             [ 0  0       * ]

Here (1.11) and (1.12) hold a.e. on a ≤ t ≤ b. The operator E appearing in (1.13) is some invertible operator on X. The symbols * denote unspecified entries. If, in addition, dim X > dim X0, then θ0 is called a proper reduction of θ. The boundary value system θ is said to be irreducible if none of the systems similar to θ admits a proper reduction. We say that the system θ is a (proper) dilation of θ0 if θ0 is a (proper) reduction of θ. If the dilation θ of θ0 has well-posed boundary conditions, then the same is true for θ0 and the systems θ and θ0 have the same input/output map.


II.2 Cascade decompositions

In this section we define the notion of a cascade decomposition of two boundary value systems. First we recall the definition of a cascade connection. For ν = 1,2 let θν = (Aν(t),Bν(t),Cν(t);N1^(ν),N2^(ν))_a^b be a time varying linear system with boundary conditions. Introduce

(2.1) A(t) = [ A1(t)  B1(t)C2(t) ],   B(t) = [ B1(t) ],
             [ 0      A2(t)     ]            [ B2(t) ]

(2.2) C(t) = ( C1(t)  C2(t) ),

(2.3) Nj = [ Nj^(1)  0      ],   j = 1,2.
           [ 0       Nj^(2) ]

By definition ([5], §II.1) the cascade connection of θ1 and θ2 is the system

        θ1θ2 = (A(t),B(t),C(t);N1,N2)_a^b,

where A(·),B(·),C(·),N1,N2 are given by (2.1) - (2.3). The system θ1θ2 appears in a natural way if one connects the output of θ2 to the input of θ1.

Let θ be a boundary value system. We say that θ admits a cascade decomposition if θ is similar to a cascade connection θ1θ2 (notation: θ ∼ θ1θ2). Assume θ ∼ θ1θ2. Since similarity preserves the well-posedness of the boundary conditions, we can apply [5], Theorem II.1.1 to show that the compounding systems θ1 and θ2 have well-posed boundary conditions. It follows that

(2.4) T_θ = T_{θ1} T_{θ2}.

Thus cascade decompositions yield factorizations of the input/output map.
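For time invariant causal systems this multiplicativity can already be seen at the level of transfer functions: the cascade state matrices (2.1)-(2.2) realize the product W1(λ)W2(λ). A numerical sketch with random illustrative data (single input and output):

```python
import numpy as np

# Cascade connection (2.1)-(2.2) for two time invariant causal systems;
# the data below are random illustrative choices.
rng = np.random.default_rng(0)
n1, n2 = 2, 3
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
B1, B2 = rng.standard_normal((n1, 1)), rng.standard_normal((n2, 1))
C1, C2 = rng.standard_normal((1, n1)), rng.standard_normal((1, n2))

A = np.block([[A1, B1 @ C2], [np.zeros((n2, n1)), A2]])   # (2.1)
Bc = np.vstack([B1, B2])                                  # (2.1)
Cc = np.hstack([C1, C2])                                  # (2.2)

def tf(A, B, C, lam):
    """Transfer function value 1 + C (lam - A)^{-1} B."""
    return 1.0 + (C @ np.linalg.solve(lam * np.eye(A.shape[0]) - A, B))[0, 0]
```

Evaluating at a point λ away from the spectra of A1 and A2 shows that the cascade realizes W1(λ)W2(λ) exactly.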

II.3 Decomposing projections

To describe how cascade decompositions may be constructed we introduce the concept of a decomposing projection for a system. Let θ = (A(t),B(t),C(t);N1,N2)_a^b be a boundary value system. As before, U(t) denotes the fundamental operator of θ and P is its canonical boundary value operator. By Uˣ(t) we denote the fundamental operator of the inverse system θˣ. Thus

(3.1) (Uˣ)'(t) = Aˣ(t)Uˣ(t),   a ≤ t ≤ b,   Uˣ(a) = I.

A projection Π of the state space X of θ is called a decomposing projection for θ if

(i) ΠPΠ = ΠP,

(ii) det(I − Π + U(t)^{-1}Uˣ(t)Π) ≠ 0,   a ≤ t ≤ b,

(iii) (I − Π)PΠ = P(I − Π){I − Π + U(b)^{-1}Uˣ(b)Π}^{-1}Π(I − P).

In §III.2 we shall see that the notion of a decomposing projection for a system is a natural extension of the corresponding concept for representations as defined in §I.2. The following lemma provides an alternative definition of a decomposing projection.

3.1. LEMMA. Let θ = (A(t),B(t),C(t);N1,N2)_a^b be a boundary value system. Let U(t) be the fundamental operator of θ and let P be its canonical boundary value operator. A projection Π of the state space X of θ is a decomposing projection for θ if and only if Ker Π is invariant under P and the following Riccati differential equation is solvable on a ≤ t ≤ b:

(3.2) { R'(t) = (I − Π + R(t)Π)(−U(t)^{-1}B(t)C(t)U(t))(R(t)Π − Π),   a ≤ t ≤ b,
      { R(a) = 0,

and its solution R(t) : Im Π → Ker Π, a ≤ t ≤ b, satisfies the additional boundary condition

(3.3) P(I − Π)R(b)Π(I − P) = (I − Π)PΠ.

Furthermore, in that case the unique solution of (3.2) is given by

(3.4) R(t) = (I − Π){I − Π + U(t)^{-1}Uˣ(t)Π}^{-1}Π,   a ≤ t ≤ b,

where Uˣ(t) denotes the fundamental operator of the inverse system θˣ.

PROOF. Let Π be a projection of the state space X of θ. Obviously, condition (i) in the definition of a decomposing projection is equivalent to the requirement that Ker Π is invariant under P. Next consider the RDE (3.2). From

(3.5) (d/dt)U(t)^{-1}Uˣ(t) = (−U(t)^{-1}B(t)C(t)U(t))U(t)^{-1}Uˣ(t)

it follows that Ω(t) = U(t)^{-1}Uˣ(t), a ≤ t ≤ b, is the fundamental operator of the RDE (3.2). But then we can apply the main result of [17] to show that the RDE (3.2) is solvable if and only if the operator

(3.6) I − Π + Ω(t)Π


is invertible for each a ≤ t ≤ b, and in that case its unique solution on a ≤ t ≤ b is given by

(3.7) R(t) = (I − Π){I − Π + Ω(t)Π}^{-1}Π,   a ≤ t ≤ b.

The invertibility for each a ≤ t ≤ b of the operator (3.6) is equivalent to condition (ii). Furthermore, (3.4) is just a reformulation of (3.7). Since R(·) is given by (3.4) it is clear that condition (iii) in the definition of a decomposing projection is the same as the extra boundary condition (3.3). □

II.4 Main decomposition theorems

Let θ = (A(t),B(t),C(t);N1,N2)_a^b be a boundary value system. The fundamental operator of θ is denoted by U(t) and P is its canonical boundary value operator. Let Π be a decomposing projection for θ. Write U(t)^{-1}B(t), C(t)U(t) and P as block matrices relative to the decomposition X = Ker Π ⊕ Im Π:

(4.1) U(t)^{-1}B(t) = [ B1(t) ],   C(t)U(t) = ( C1(t)  C2(t) ),
                      [ B2(t) ]

(4.2) P = [ P11  P12 ].
          [ P21  P22 ]

Introduce the following systems:

        ℓΠ(θ) := (0, B1(t) + R(t)B2(t), C1(t); I − P11, P11)_a^b,

        rΠ(θ) := (0, B2(t), −C1(t)R(t) + C2(t); I − P22, P22)_a^b.

Here R(t) is the solution of the RDE (3.2) (or, equivalently, R(t) is given by (3.4)) and the other terms come from (4.1) and (4.2). We call ℓΠ(θ) the left factor and rΠ(θ) the right factor associated with Π and θ. This terminology is justified by the next theorem.

4.1. THEOREM. If Π is a decomposing projection for θ, then

(4.3) θ ∼ ℓΠ(θ) rΠ(θ),

and the similarity S(t) between θ and ℓΠ(θ)rΠ(θ) is given by

        S(t) = [ I_{Ker Π}  R(t)     ] U(t)^{-1} : Ker Π ⊕ Im Π → Ker Π ⊕ Im Π,   a ≤ t ≤ b,
               [ 0          I_{Im Π} ]


where R(·) is the solution of the RDE (3.2) or, equivalently, R(·) is given by (3.4).

The next theorem shows that up to similarity all cascade decompositions of θ may be obtained from decomposing projections.

4.2. THEOREM. Assume θ ∼ θ1θ2 is a cascade decomposition of the boundary value system θ. Then there exists a decomposing projection Π for θ such that

(4.4) θ1 ∼ ℓΠ(θ),   θ2 ∼ rΠ(θ).

Formula (4.4) leads to the following definition. Two cascade decompositions θ ∼ θ1θ2 and θ ∼ θ̃1θ̃2 of θ are said to be equivalent if θ1 ∼ θ̃1 and θ2 ∼ θ̃2.

4.3. THEOREM. Let θ be an irreducible system. Then the map

(4.5) Π ↦ ℓΠ(θ) rΠ(θ)

defines a one-one correspondence between all decomposing projections Π for θ and all non-equivalent cascade decompositions of θ. Furthermore, the factors ℓΠ(θ) and rΠ(θ) are irreducible.

The proofs of Theorems 4.1 - 4.3 will be given in the next three sections.

The systems ℓΠ(θ) and rΠ(θ) are boundary value systems and the kernels of their input/output maps are, respectively, given by

(4.6) k1(t,s) = {  C(t)U(t)(I − P)(I − Π(s))U(s)^{-1}B(s),   a ≤ s < t ≤ b,
               { −C(t)U(t)P(I − Π(s))U(s)^{-1}B(s),         a ≤ t < s ≤ b,

(4.7) k2(t,s) = {  C(t)U(t)Π(t)(I − P)U(s)^{-1}B(s),   a ≤ s < t ≤ b,
               { −C(t)U(t)Π(t)PU(s)^{-1}B(s),          a ≤ t < s ≤ b.

Here for a ≤ t ≤ b the operator Π(t) : X → X is given by

(4.8) Π(t) = U(t)^{-1}Uˣ(t)Π(I − Π + U(t)^{-1}Uˣ(t)Π)^{-1}Π.

Since ΠR(t) = 0, the operator Π(t) is a projection for each t ∈ [a,b]. Note that Theorem 4.1 implies that

        T_θ = T_{ℓΠ(θ)} T_{rΠ(θ)}.


II.5 Proof of Theorem II.4.1

In this section we derive Theorem 4.1 as a corollary of a more general cascade decomposition theorem. Let θ = (A(t),B(t),C(t);N1,N2)_a^b be a given boundary value system. As usual U(t) and Uˣ(t) denote the fundamental operators of θ and θˣ. The operator P is the canonical boundary value operator of θ and Aˣ(t) = A(t) − B(t)C(t).

Let X = X1 ⊕ X2 be a direct sum decomposition of the state space X of θ. Relative to this decomposition we write:

(5.1) A(t) = [ A11(t)  A12(t) ],   Aˣ(t) = [ Aˣ11(t)  Aˣ12(t) ],
             [ A21(t)  A22(t) ]            [ Aˣ21(t)  Aˣ22(t) ]

(5.2) U(t) = [ U11(t)  U12(t) ],   Uˣ(t) = [ Uˣ11(t)  Uˣ12(t) ],
             [ U21(t)  U22(t) ]            [ Uˣ21(t)  Uˣ22(t) ]

(5.3) B(t) = [ B1(t) ],   C(t) = ( C1(t)  C2(t) ),
             [ B2(t) ]

(5.4) P = [ P11  P12 ].
          [ P21  P22 ]

Consider the following Riccati differential equations:

(5.5) { R21'(t) = −A21(t) − R21(t)A11(t) + A22(t)R21(t) + R21(t)A12(t)R21(t),   a ≤ t ≤ b,
      { R21(a) = 0,

(5.6) { R12'(t) = −Aˣ12(t) − R12(t)Aˣ22(t) + Aˣ11(t)R12(t) + R12(t)Aˣ21(t)R12(t),   a ≤ t ≤ b,
      { R12(a) = 0.

Note that U(t) and Uˣ(t) are the fundamental operators of the RDE's (5.5) and (5.6), respectively. It follows (cf. [17]) that (5.5) is solvable on a ≤ t ≤ b if and only if det U11(t) ≠ 0, a ≤ t ≤ b, and in that case the unique solution of (5.5) is given by

(5.7) R21(t) = −U21(t)U11(t)^{-1},   a ≤ t ≤ b.

Similarly, equation (5.6) is solvable on a ≤ t ≤ b if and only if


det Uˣ22(t) ≠ 0, a ≤ t ≤ b, and in that case the unique solution of (5.6) is equal to

(5.8) R12(t) = −Uˣ12(t)Uˣ22(t)^{-1},   a ≤ t ≤ b.
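The solution formula (5.7) (as reconstructed here) is easy to spot-check: with a constant main coefficient the blocks of U(t) = e^{tA} give R21(t) = −U21(t)U11(t)^{-1}, which should satisfy the Riccati equation (5.5). A numerical sketch with an arbitrary small 4 × 4 matrix:

```python
import numpy as np

# Illustrative constant main coefficient; scaled small so that U11(t) stays invertible.
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((4, 4))
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]

def expm(X, terms=40):
    """Matrix exponential via its power series (fine for small ||X||)."""
    out, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for n in range(1, terms):
        term = term @ X / n
        out = out + term
    return out

def R21(t):
    # candidate solution of (5.5): R21(t) = -U21(t) U11(t)^{-1}
    Ut = expm(t * A)
    return -Ut[2:, :2] @ np.linalg.inv(Ut[:2, :2])
```

A centred difference confirms that R21 solves (5.5) with R21(a) = 0 (here a = 0).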

5.1 THEOREM. Assume that the Riccati equations (5.5) and (5.6) are solvable on a ≤ t ≤ b and that their solutions have the following extra property:

(5.9) det(I − R12(t)R21(t)) ≠ 0,   a ≤ t ≤ b.

Furthermore assume that

(5.10)

where

(5.11)

Then θ admits a cascade decomposition, θ ∼ θ1θ2, with similarity

(5.12) S(t) = [ I       R12(t) ],   a ≤ t ≤ b,
              [ R21(t)  I      ]

and the compounding systems are given by θν = (Ãν(t), B̃ν(t), C̃ν(t); N1^(ν), N2^(ν))_a^b, ν = 1,2, where

        Ã1(t) = (A11(t) + R12(t)A21(t) − A12(t)R21(t) − R12(t)A22(t)R21(t) − R12(t)R21'(t))Δ(t)^{-1},

        B̃1(t) = B1(t) + R12(t)B2(t),

        C̃1(t) = (C1(t) − C2(t)R21(t))Δ(t)^{-1},

        Ã2(t) = (A22(t) + R21(t)A12(t) − A21(t)R12(t) − R21(t)A11(t)R12(t) − R21(t)R12'(t))∇(t)^{-1},

        B̃2(t) = R21(t)B1(t) + B2(t),

        C̃2(t) = (−C1(t)R12(t) + C2(t))∇(t)^{-1},

        N1^(1) = I − P11,   N2^(1) = P11U11(b)^{-1}Δ(b)^{-1},

        N1^(2) = I − P22,   N2^(2) = P22(U22(b) − U21(b)U11(b)^{-1}U12(b))^{-1}.

Here Δ(t) = I − R12(t)R21(t) and ∇(t) = I − R21(t)R12(t).

PROOF. The solvability of the Riccati equation (5.5) implies that det U11(b) ≠ 0. Thus U11(b)^{-1} exists. Next, observe that the invertibility of U11(b) and U(b) yields the invertibility of the Schur complement U22(b) − U21(b)U11(b)^{-1}U12(b) (cf. [2], Remark 1.2). From (5.9) we may conclude that both Δ(t) and ∇(t) are invertible for each a ≤ t ≤ b. It follows that the various operators and operator functions in Theorem 5.1 are well-defined.

Since Δ(t) and ∇(t) are invertible, the same is true for S(t). In fact,

        S(t)^{-1} = [ Δ(t)^{-1}         −R12(t)∇(t)^{-1} ],   a ≤ t ≤ b.
                    [ −R21(t)Δ(t)^{-1}   ∇(t)^{-1}       ]

Thus S(t) establishes a similarity transformation in the state space X = X1 ⊕ X2. Let θ̄ be the system which one obtains when the similarity transformation S(t), a ≤ t ≤ b, is applied to θ. So θ̄ = (Ā(t), B̄(t), C̄(t); N̄1, N̄2)_a^b, where

        Ā(t) = S(t)A(t)S(t)^{-1} + S'(t)S(t)^{-1},

        B̄(t) = S(t)B(t),   C̄(t) = C(t)S(t)^{-1},   Ū(t) = S(t)U(t).

From S(a) = I, it follows that Ū(t) is the fundamental operator of θ̄, and hence θ ∼ θ̄ indeed.

The fact that R21(·) is the solution of the Riccati equation (5.5) on a ≤ t ≤ b implies that relative to the decomposition X = X1 ⊕ X2 the operator Ā(t) admits the following partitioning:

        Ā(t) = [ Ã1(t)  *     ],   a ≤ t ≤ b.
               [ 0      Ã2(t) ]

Here Ã1(t) and Ã2(t) are the main coefficients of θ1 and θ2, respectively.

Next, observe that

        Āˣ(t) = S(t)Aˣ(t)S(t)^{-1} + S'(t)S(t)^{-1}.

Since R12(·) is the solution of the Riccati equation (5.6) on a ≤ t ≤ b, it follows that Āˣ(t) admits the following partitioning:

        Āˣ(t) = [ *  0 ],   a ≤ t ≤ b.
                [ *  * ]

By a direct computation one shows that

        B̄(t) = [ B̃1(t) ],   C̄(t) = ( C̃1(t)  C̃2(t) ),
               [ B̃2(t) ]

where for ν = 1,2 the operator functions B̃ν(·) and C̃ν(·) are the input coefficient and output coefficient of θν, respectively. We conclude that θ̄ and the cascade connection θ1θ2 have the same coefficients.

Next we compare the boundary value operators of θ̄ and θ1θ2. Put

where Q is given by (5.11). We shall prove that

(5.13) E(I − P) = [ N1^(1)  0      ],   EPŪ(b)^{-1} = [ N2^(1)  0      ].
                  [ 0       N1^(2) ]                  [ 0       N2^(2) ]

The first equality in (5.13) is a direct consequence of (5.10) and of the definitions of N1^(1) and N1^(2). To prove the second equality in (5.13) we first compute Ū(b)^{-1}. Since Ū(b) = S(b)U(b), we have

        Ū(b) = [ Δ(b)U11(b)  Δ(b)U11(b)Q                       ].
               [ 0           U22(b) − U21(b)U11(b)^{-1}U12(b) ]

Here we used (5.7) and the fact that Q is given by (5.11). It follows that

        EPŪ(b)^{-1} = [ P11  P11Q ] Ū(b)^{-1} = [ T11  T12 ],
                      [ 0    P22  ]             [ 0    T22 ]


where

        T11 = P11U11(b)^{-1}Δ(b)^{-1} = N2^(1),

        T12 = −P11Q(U22(b) − U21(b)U11(b)^{-1}U12(b))^{-1} + P11Q(U22(b) − U21(b)U11(b)^{-1}U12(b))^{-1} = 0,

        T22 = P22(U22(b) − U21(b)U11(b)^{-1}U12(b))^{-1} = N2^(2).

This proves the second equality in (5.13). From (5.13) and the fact that θ̄ and θ1θ2 have the same coefficients it is clear that θ̄ ∼ θ1θ2, and thus θ ∼ θ1θ2. □

Theorem 5.1 simplifies considerably if the main coefficient A(·) of θ is zero. In fact, if A(·) = 0, then U(t) = I, a ≤ t ≤ b, and the Riccati equation (5.5) reduces to a trivial equation which is solvable on a ≤ t ≤ b and has R21(t) = 0, a ≤ t ≤ b, as its unique solution. It follows that in this case condition (5.9) is automatically fulfilled and condition (5.10) simplifies to

(5.14)

Furthermore, the compounding systems θ1 and θ2 in the cascade decomposition θ ∼ θ1θ2 are now given by

PROOF of Theorem 4.1. Let θ = (A(t),B(t),C(t);N1,N2)_a^b be a boundary value system, and let Π be a decomposing projection for θ. Introduce the system

(5.15) θ0 = (0, U(t)^{-1}B(t), C(t)U(t); I − P, P)_a^b.

Here U(t) is the fundamental operator of θ and P is its canonical boundary value operator. Note that the main coefficient of θ0 is identically zero and the main coefficient of the inverse system θ0ˣ is equal to −U(t)^{-1}B(t)C(t)U(t). Put X = X1 ⊕ X2, where X1 = Ker Π and X2 = Im Π. The fact that Π is a decomposing projection for θ is precisely equal to the statement that the conditions of Theorem 5.1 are fulfilled for the system θ0 (cf. the paragraph preceding this proof and Lemma 3.1). It follows that θ0 ∼ θ1θ2, and one checks that θ1 = ℓΠ(θ) and θ2 = rΠ(θ). Since θ ∼ θ0 one gets the cascade decomposition (4.3). Furthermore, the fact that the similarity between θ and θ0 is given by U(t)^{-1} implies that the similarity in (4.3) has the desired form. □

II.6 Proof of Theorem II.4.2

We first prove two auxiliary results concerning decomposing projections. The first shows that decomposing projections are preserved under similarity.

6.1. PROPOSITION. Let S(t) : X₁ → X₂, a ≤ t ≤ b, be a similarity transformation between the boundary value systems θ₁ and θ₂, and let Π₁ be a decomposing projection for θ₁. Then Π₂ := S(a)Π₁S(a)⁻¹ is a decomposing projection for θ₂ and

(6.1)

PROOF. Let θ₁ and θ₂ be as in (1.6). Since S(t) : X₁ → X₂, a ≤ t ≤ b, is a similarity transformation between θ₁ and θ₂ we know that the formulas (1.7) - (1.9) hold true. Further, the fundamental operators U₁(t) and U₂(t) and the canonical boundary value operators P₁ and P₂ of θ₁ and θ₂, respectively, are related in the following way:

(6.2)

Now, assume that Π₁ is a decomposing projection for θ₁, and define Π₂ := S(a)Π₁S(a)⁻¹. Obviously, Π₂ is a projection. The operator S(a) : X₁ → X₂ has the following block matrix representation:

    S(a) = [ E  0 ] : Ker Π₁ ⊕ Im Π₁ → Ker Π₂ ⊕ Im Π₂.
           [ 0  F ]

Here E : Ker Π₁ → Ker Π₂ and F : Im Π₁ → Im Π₂ are invertible operators. For v = 1,2 write P_v as a 2 × 2 operator matrix relative to the decomposition X_v = Ker Π_v ⊕ Im Π_v:

    P_v = [ P₁₁^(v)  P₁₂^(v) ]
          [ P₂₁^(v)  P₂₂^(v) ] .


From the second identity in (6.2) it follows that

(6.3)

(6.4)

Since Π₁ is a decomposing projection for θ₁ we know that P₂₁^(1) = 0. Hence P₂₁^(2) = 0. From formulas (1.7) - (1.9) and the first identity in (6.2) it follows that

(6.5)    U₂(t)⁻¹B₂(t) = S(a)U₁(t)⁻¹B₁(t),

(6.6)    C₂(t)U₂(t) = C₁(t)U₁(t)S(a)⁻¹.

Next, consider the RDE:

(6.7)    Ṙ_v(t) = (I - Π_v + R_v(t)Π_v)(-U_v(t)⁻¹B_v(t)C_v(t)U_v(t))(R_v(t)Π_v - Π_v),    a ≤ t ≤ b,

         R_v(a) = 0,    P₁₁^(v)R_v(b)(I - P₂₂^(v)) = P₁₂^(v).

Here v = 1,2. We know that for v = 1 the RDE (6.7) has a solution R₁(t) : Im Π₁ → Ker Π₁. Put

    R₂(t) = ER₁(t)F⁻¹,    a ≤ t ≤ b.

From (6.3) - (6.6) it follows that R₂(·) is the solution of the RDE (6.7) for the case v = 2. We have now proved that Π₂ is a decomposing projection for θ₂. It remains to prove (6.1). Using (6.5), (6.6) and the first identity in (6.3) it is straightforward to check that E : Ker Π₁ → Ker Π₂ provides a similarity transformation which transforms ℓ_Π₁(θ₁) into ℓ_Π₂(θ₂). In the same way, using (6.5), (6.6) and the second identity in (6.4) one checks that F : Im Π₁ → Im Π₂ provides a similarity between r_Π₁(θ₁) and r_Π₂(θ₂). □

Next we show that the factorization projections introduced in [5], §III.1 are decomposing projections. Let θ = (A(t),B(t),C(t);N₁,N₂)ₐᵇ be a boundary value system. Put A^×(t) = A(t) - B(t)C(t), a ≤ t ≤ b. Following [5] we call a projection Π of the state space X of θ a factorization projection for θ if Π commutes with the boundary value operators N₁ and N₂ and

(6.8)    A(t) Ker Π ⊂ Ker Π,    A^×(t) Im Π ⊂ Im Π,


for almost all t ∈ [a,b]. Let Π be such a projection. Write A(t), B(t), C(t), N₁ and N₂ as block matrices relative to the decomposition X = Ker Π ⊕ Im Π:

(6.9)    A(t) = [ A₁₁(t)  A₁₂(t) ]       B(t) = [ B₁(t) ]
                [ 0       A₂₂(t) ] ,             [ B₂(t) ] ,

(6.10)   C(t) = ( C₁(t)  C₂(t) ),

(6.11)   N_j = [ N_j^(1)  0       ]
               [ 0        N_j^(2) ] ,    j = 1,2.

Note that the triangular form of A(t) follows from the first inclusion in (6.8). The diagonal forms (6.11) for N₁ and N₂ are a consequence of the fact that Π commutes with N₁ and N₂. Using the partitionings (6.9) - (6.11), we introduce the following systems:

The symbol pr stands for projection. From [5], Theorem III.1.1 it follows that

(6.12)

6.2. PROPOSITION. Let Π be a factorization projection for θ. Then Π is a decomposing projection for θ and

(6.13)

PROOF. Note that the second inclusion in (6.8) implies that A₁₂(t) = B₁(t)C₂(t) a.e. on a ≤ t ≤ b. Let U(t) be the fundamental operator of θ. From the first identity in (6.9) and the remark just made it follows that

    U(t) = [ U₁₁(t)  V(t)   ]
           [ 0       U₂₂(t) ] ,    a ≤ t ≤ b,

where U₁₁(t) and U₂₂(t) are the fundamental operators of pr_{I-Π}(θ) and pr_Π(θ), respectively, and V(t) = U₁₁(t)H(t) with

    H(t) = ∫ₐᵗ U₁₁(s)⁻¹B₁(s)C₂(s)U₂₂(s)ds.

By a direct computation one shows that H(·) is the solution of the RDE:

(6.14)    Ḣ(t) = (I - Π + H(t)Π)(-U(t)⁻¹B(t)C(t)U(t))(H(t)Π - Π),    a ≤ t ≤ b,

          H(a) = 0.

From the proof of Theorem II.1.1 in [5] we may conclude that the canonical boundary value operator P of θ is given by

(6.15)

with

(6.16)

Thus P has the desired triangular form and the solution of (6.14) satisfies the additional boundary condition (6.16). Hence Π is a decomposing projection for θ. Note that

    (I - Π + H(t)Π)U(t)⁻¹B(t) = U₁₁(t)⁻¹B₁(t),

    C(t)U(t)(Π - H(t)Π) = C₂(t)U₂₂(t).

It follows that the left and right factors corresponding to Π and θ are given by

Thus ℓ_Π(θ) is similar to pr_{I-Π}(θ) with similarity U₁₁(t) and r_Π(θ) is similar to pr_Π(θ) with similarity U₂₂(t). □

PROOF of Theorem 4.2. Assume θ ∼ θ₁θ₂. Write θ₀ = θ₁θ₂. From Theorem III.1.1 in [5] we know that θ₀ has a factorization projection Π₀ such that

(6.17)

From Proposition 6.2 we know that Π₀ is a decomposing projection for θ₀. Since θ ∼ θ₀ we can apply Proposition 6.1 to show that θ has a decomposing projection Π such that


Now use (6.13) for θ₀ and Π₀ and (6.17). This yields (4.4). □

II.7 Proof of Theorem II.4.3

Let θ be an irreducible system. According to Theorem 4.1 the right hand side of (4.5) is a cascade decomposition of θ for each decomposing projection Π for θ. From Theorem 4.2 it follows that each cascade decomposition of θ is equivalent to a cascade decomposition of the form ℓ_Π(θ)r_Π(θ). Thus the map (4.5) is well-defined and onto.

To prove that the map (4.5) is one-one we use the irreducibility of θ. Let Π₁ and Π₂ be two decomposing projections for θ, and let R₁(t) : Im Π₁ → Ker Π₁ and R₂(t) : Im Π₂ → Ker Π₂ be the solutions of the corresponding RDE's. Assume

(7.1)    ℓ_Π₁(θ) ∼ ℓ_Π₂(θ),    r_Π₁(θ) ∼ r_Π₂(θ).

We have to show that Π₁ = Π₂. Since the left and right factors of θ have zero main coefficients, the similarities in (7.1) do not depend on time. Let S₁ : Ker Π₁ → Ker Π₂ give the first similarity in (7.1) and S₂ : Im Π₁ → Im Π₂ the second. Introduce

    S = [ S₁  0  ] : Ker Π₁ ⊕ Im Π₁ → Ker Π₂ ⊕ Im Π₂.
        [ 0   S₂ ]

Obviously, S is a similarity between ℓ_Π₁(θ)r_Π₁(θ) and ℓ_Π₂(θ)r_Π₂(θ) (cf. [5], Proposition II.1.3). But then

    S(t) = U(t)SU(t)⁻¹,    a ≤ t ≤ b,

is a similarity of θ with itself. Since θ is irreducible, a self-similarity of θ is trivial, i.e., S(t) = I for a ≤ t ≤ b (see [7], Theorem 2.2; also [6], Propositions II.3.2 and II.3.3). In particular, S = S(a) = I, which implies that Π₁ = Π₂. Thus the map (4.5) is one-one.

Let θ be irreducible, and let Π be a decomposing projection for θ. It remains to prove that ℓ_Π(θ) and r_Π(θ) are also irreducible. We know (see [7], Theorem 2.4) that ℓ_Π(θ) is similar to a system θ₁ which is a dilation of an irreducible system θ₁₀. Similarly, r_Π(θ) is similar to a system θ₂ which is a dilation of an irreducible system θ₂₀. Clearly, ℓ_Π(θ)r_Π(θ) ∼ θ₁θ₂ (cf. [5], Proposition II.1.3). Next, apply Lemma 7.1 below. It follows that θ₁θ₂ is a dilation of θ₁₀θ₂₀, and hence θ is similar to a dilation of θ₁₀θ₂₀. But θ is irreducible, and so the latter dilation cannot be proper. It follows that θ₁ is not a proper dilation of θ₁₀ and θ₂ is not a proper dilation of θ₂₀. Thus θ₁ and θ₂ are irreducible. Since irreducibility is preserved under similarity, we conclude that ℓ_Π(θ) and r_Π(θ) are irreducible. □

7.1. LEMMA. For v = 1,2 let the system θ_v be a dilation of θ_v0. Then the cascade connection θ₁θ₂ is a dilation of θ₁₀θ₂₀.

PROOF. For v = 1,2 let

    θ_v = (A_v(t),B_v(t),C_v(t);N₁^(v),N₂^(v))ₐᵇ,

    θ_v0 = (A_v0(t),B_v0(t),C_v0(t);N₁₀^(v),N₂₀^(v))ₐᵇ.

Since θ_v is a dilation of θ_v0, the state space X_v of θ_v admits a decomposition X_v = X_v1 ⊕ X_v0 ⊕ X_v2 such that relative to this decomposition

    A_v(t) = [ *  *        * ]       B_v(t) = [ *       ]
             [ 0  A_v0(t)  * ] ,              [ B_v0(t) ]
             [ 0  0        * ]                [ 0       ] ,

    C_v(t) = ( 0  C_v0(t)  * ),

    N_j^(v) = E^(v) [ *  *         * ]
                    [ 0  N_j0^(v)  * ] ,    j = 1,2.
                    [ 0  0         * ]

Here E^(v) is an invertible operator on the space X_v. Put X = X₁ ⊕ X₂, X̃₁ = X₁₁ ⊕ X₂₁, X̃₀ = X₁₀ ⊕ X₂₀ and X̃₂ = X₁₂ ⊕ X₂₂. Then X is the state space of θ₁θ₂ and X̃₀ is the state space of θ₁₀θ₂₀. Furthermore

(7.2)    X = X̃₁ ⊕ X̃₀ ⊕ X̃₂.

Put


Recall that A(t), B(t), C(t), N₁ and N₂ are given by (2.1) - (2.3). Consider the partitioning of A(t), B(t), C(t), N₁ and N₂ relative to the decomposition

    X = X₁₁ ⊕ X₂₁ ⊕ X₁₀ ⊕ X₂₀ ⊕ X₁₂ ⊕ X₂₂.

One finds that

    A(t) = [ *  0  *       *             *  * ]
           [ 0  *  0       *             0  * ]
           [ 0  0  A₁₀(t)  B₁₀(t)C₂₀(t)  *  * ]
           [ 0  0  0       A₂₀(t)        0  * ]
           [ 0  0  0       0             *  0 ]
           [ 0  0  0       0             0  * ] ,

    B(t) = [ *      ]
           [ *      ]
           [ B₁₀(t) ]
           [ B₂₀(t) ] ,       C(t) = ( 0  0  C₁₀(t)  C₂₀(t)  *  * ),
           [ 0      ]
           [ 0      ]

    N_j = E [ *  0  *         0         *  0 ]
            [ 0  *  0         *         0  * ]
            [ 0  0  N_j0^(1)  0         *  0 ]
            [ 0  0  0         N_j0^(2)  0  * ] ,    j = 1,2,
            [ 0  0  0         0         *  0 ]
            [ 0  0  0         0         0  * ]

where

    E = [ E^(1)  0     ]
        [ 0      E^(2) ] .


It follows that relative to the decomposition (7.2)

    A(t) = [ *  *      * ]       B(t) = [ *     ]
           [ 0  A₀(t)  * ] ,            [ B₀(t) ]
           [ 0  0      * ]              [ 0     ] ,

    C(t) = ( 0  C₀(t)  * ),

    N_j = E [ *  *     * ]
            [ 0  N_j0  * ] ,    j = 1,2.
            [ 0  0     * ]

This shows that θ₁θ₂ is a dilation of θ₁₀θ₂₀. □

II.8 Decomposing projections for inverse systems

The aim of this section is to prove the following theorem.

8.1. THEOREM. Let the system θ and its inverse system θ^× have well-posed boundary conditions. Then Π is a decomposing projection for θ if and only if I - Π is a decomposing projection for θ^×, and in that case

(8.1)

PROOF. Let θ = (A(t),B(t),C(t);N₁,N₂)ₐᵇ. Put Ω(t) = U(t)⁻¹U^×(t), a ≤ t ≤ b, where U(t) and U^×(t) are the fundamental operators of θ and θ^×, respectively. Furthermore, let P and P^× be the canonical boundary value operators of θ and θ^×, respectively. It is known (see [5], §§ II.3 and II.4) that

(8.2)

Now assume that Π is a decomposing projection for θ. Write P, P^× and Ω(t) as 2×2 operator matrices relative to the decomposition X = Ker Π ⊕ Im Π:

    P = [ P₁₁  P₁₂ ]       P^× = [ P₁₁^×  P₁₂^× ]
        [ P₂₁  P₂₂ ] ,            [ P₂₁^×  P₂₂^× ] ,

    Ω(t) = [ Ω₁₁(t)  Ω₁₂(t) ]
           [ Ω₂₁(t)  Ω₂₂(t) ] ,    a ≤ t ≤ b.


The fact that Π is a decomposing projection for θ is equivalent to the following statements:

(8.3)    P₁₂ = 0,

(8.4)    det Ω₂₂(t) ≠ 0,    a ≤ t ≤ b,

(8.5)

For a ≤ t ≤ b set

(8.6)    R(t) = -Ω₁₂(t)Ω₂₂(t)⁻¹,

(8.7)    Δ(t) = Ω₁₁(t) - Ω₁₂(t)Ω₂₂(t)⁻¹Ω₂₁(t).

Using (8.3) and (8.5) one finds that

(8.8)


It follows that I - P₁₁ + P₁₁Δ(b) and I - P₂₂ + P₂₂Ω₂₂(b) are invertible. Denote the right hand side of (8.8) by L. Then we see from (8.2), (8.3) and (8.5) that

and hence

    P₁₁^× = (I - P₁₁ + P₁₁Δ(b))⁻¹P₁₁Δ(b),    P₁₂^× = 0,

    P₂₁^× = (I - P₂₂ + P₂₂Ω₂₂(b))⁻¹P₂₂Ω₂₁(b)(I - P₁₁ + P₁₁Δ(b))⁻¹(I - P₁₁),

    P₂₂^× = (I - P₂₂ + P₂₂Ω₂₂(b))⁻¹P₂₂Ω₂₂(b).

We prove now that I - Π is a decomposing projection for θ^×. From P₁₂^× = 0, it is clear that (I - Π)P^×(I - Π) = (I - Π)P^×, and hence for I - Π the condition (i) in the definition of a decomposing projection is satisfied. Next, observe that

Since Π is a decomposing projection for θ, the last determinant in this identity is ≠ 0 for each a ≤ t ≤ b. But then

    a ≤ t ≤ b,

which proves the second condition for a decomposing projection. To check the third condition, note that

    = P^×Π{I - Π + Ω(b)Π}⁻¹Ω(b)(I - Π)(I - P^×)

    = [ 0                                  0 ]
      [ P₂₂^×Ω₂₂(b)⁻¹Ω₂₁(b)(I - P₁₁^×)    0 ] .

Now use the formulas for P₁₁^×, P₂₁^× and P₂₂^× derived above. One sees that

and hence for I - Π condition (iii) also holds true.

Write U(t)⁻¹B(t), C(t)U(t), U^×(t)⁻¹B(t), C(t)U^×(t) as block matrices relative to the decomposition X = Ker Π ⊕ Im Π:

Then

    ℓ_Π(θ) = (0, B₁(t) + R(t)B₂(t), C₁(t); I - P₁₁, P₁₁)ₐᵇ,

    r_Π(θ) = (0, B₂(t), -C₁(t)R(t) + C₂(t); I - P₂₂, P₂₂)ₐᵇ.

Here R(t) is given by (8.6) and


A simple computation shows that R^×(t) = Ω₂₂(t)⁻¹Ω₂₁(t) for a ≤ t ≤ b, and hence

    [ I  R(t) ] Ω(t) [ I        0 ]  =  [ Δ(t)  0      ]
    [ 0  I    ]      [ -R^×(t)  I ]     [ 0     Ω₂₂(t) ]

for a ≤ t ≤ b. The latter identity can be rewritten in the form:

(8.9)    U^×(t) = U(t) [ I  R(t) ]⁻¹ [ Δ(t)  0      ] [ I       0 ]
                       [ 0  I    ]   [ 0     Ω₂₂(t) ] [ R^×(t)  I ] .

It follows that for a ≤ t ≤ b

    B₂(t) = Ω₂₂(t)(B₂^×(t) + R^×(t)B₁^×(t)),

    C₁(t) = (C₁^×(t) - C₂^×(t)R^×(t))Δ(t)⁻¹.

Next we use (cf. (3.5)) that

    Ω̇(t) = (-U(t)⁻¹B(t)C(t)U(t))Ω(t),    a ≤ t ≤ b.

From this identity one may conclude that

    Δ̇(t) = -(B₁(t) + R(t)B₂(t))C₁(t)Δ(t),    a ≤ t ≤ b,

    Ω̇₂₂(t) = -B₂(t)(C₂(t) - C₁(t)R(t))Ω₂₂(t),    a ≤ t ≤ b.

Also, note that Δ(a) = I and Ω₂₂(a) = I. It is now clear that

    (ℓ_Π(θ))^× = (Δ̇(t)Δ(t)⁻¹, B₁(t) + R(t)B₂(t), -C₁(t); I - P₁₁, P₁₁)ₐᵇ,

    (r_Π(θ))^× = (Ω̇₂₂(t)Ω₂₂(t)⁻¹, B₂(t), C₁(t)R(t) - C₂(t); I - P₂₂, P₂₂)ₐᵇ.


It follows that Δ(t)⁻¹, a ≤ t ≤ b, provides a similarity transformation between (ℓ_Π(θ))^× and r_{I-Π}(θ^×), and Ω₂₂(t)⁻¹, a ≤ t ≤ b, provides a similarity transformation between (r_Π(θ))^× and ℓ_{I-Π}(θ^×). This proves formula (8.1) and the "only if part" of the theorem. To prove the "if part" one applies the foregoing to I - Π and θ^×. □
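The block factorization of Ω(t) used in the proof is easy to check on a toy example in which all four blocks of Ω(t) are scalars (the sample values below are assumptions; only Ω₂₂ ≠ 0 matters):

```python
from fractions import Fraction as F

# Scalar stand-ins for the blocks of Omega(t); Omega_22 must be invertible.
o11, o12, o21, o22 = F(5), F(2), F(3), F(4)

R     = -o12 / o22              # R(t) = -Omega_12 Omega_22^{-1}, cf. (8.6)
Rx    = o21 / o22               # R^x(t) = Omega_22^{-1} Omega_21
Delta = o11 - o12 * o21 / o22   # Delta(t), the Schur complement, cf. (8.7)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Omega = [[o11, o12], [o21, o22]]
L = [[F(1), R], [F(0), F(1)]]       # [I  R(t); 0  I]
Rf = [[F(1), F(0)], [-Rx, F(1)]]    # [I  0; -R^x(t)  I]
assert matmul(matmul(L, Omega), Rf) == [[Delta, F(0)], [F(0), o22]]
```

Inverting the two outer triangular factors turns this factorization into the product formula for U^×(t) = U(t)Ω(t) stated in (8.9).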

III. PROOFS OF THE MAIN THEOREMS

In this chapter we prove the results stated in Chapter I. The proofs are based on the cascade decomposition theory developed in the previous chapter. The last section is devoted to causal/anticausal decompositions of systems.

III.1 A factorization lemma

1.1. LEMMA. Let Π be a decomposing projection for the boundary value system θ = (A(t),B(t),C(t);N₁,N₂)ₐᵇ, and let T, T₁ and T₂ be the input/output maps of θ, ℓ_Π(θ) and r_Π(θ), respectively. Then T = T₁T₂ and the kernels of T₁ and T₂ are given by

(1.1)    k₁(t,s) = { C(t)U(t)(I - P)(I - Π(s))U(s)⁻¹B(s),    a ≤ s < t ≤ b,
                  { -C(t)U(t)P(I - Π(s))U(s)⁻¹B(s),         a ≤ t < s ≤ b,

(1.2)    k₂(t,s) = { C(t)U(t)Π(t)(I - P)U(s)⁻¹B(s),    a ≤ s < t ≤ b,
                  { -C(t)U(t)Π(t)PU(s)⁻¹B(s),          a ≤ t < s ≤ b,

respectively. Here

(1.3)    Π(t) = U(t)⁻¹U^×(t)Π(I - Π + U(t)⁻¹U^×(t)Π)⁻¹Π,    a ≤ t ≤ b.

Furthermore, U(t) and U^×(t) denote the fundamental operators of θ and the inverse system θ^×, respectively, and P is the canonical boundary value operator of θ.

PROOF. Since Π is a decomposing projection, the operator Π(t) is well-defined. Note that

    Π(t) = ΠΠ(t) + (I - Π)Π(t)

         = Π + (I - Π)U(t)⁻¹U^×(t)Π(I - Π + U(t)⁻¹U^×(t)Π)⁻¹Π

         = Π - (I - Π)(I - Π + U(t)⁻¹U^×(t)Π)⁻¹Π = Π - R(t)Π,

with R(t) = (I - Π)(I - Π + U(t)⁻¹U^×(t)Π)⁻¹Π. It follows that

    (I - Π(s))U(s)⁻¹B(s) = (I - Π + R(s)Π)U(s)⁻¹B(s),

    C(t)U(t)Π(t) = C(t)U(t)(Π - R(t)Π).

Note that R(·) is precisely the function R(·) appearing in the definitions of ℓ_Π(θ) and r_Π(θ). The fact that Π is a decomposing projection for θ also implies that

    (I - Π)P(I - Π) = (I - Π)P,    ΠPΠ = PΠ.

Using these observations and formula (II.1.4) it is now easily checked that the kernels of T₁ and T₂ are given by (1.1) and (1.2), respectively. From formula (II.4.3) it follows that T = T₁T₂. □
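The resolvent identity Π(t) = Π - R(t)Π used at the start of the proof can be checked numerically; below Π is the projection onto the second coordinate of ℂ² and Ω stands in for U(t)⁻¹U^×(t) (the sample entries are assumptions):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

P  = [[F(0), F(0)], [F(0), F(1)]]   # the projection Pi (onto Im Pi)
IP = [[F(1), F(0)], [F(0), F(0)]]   # I - Pi
Om = [[F(2), F(1)], [F(1), F(1)]]   # stand-in for U(t)^{-1}U^x(t)

OmP  = matmul(Om, P)
M    = [[IP[i][j] + OmP[i][j] for j in range(2)] for i in range(2)]  # I - Pi + Omega Pi
Minv = inv2(M)
Pi_t = matmul(matmul(OmP, Minv), P)            # Pi(t) as in (1.3)
R    = matmul(matmul(IP, Minv), P)             # R(t) = (I-Pi)(I-Pi+Omega Pi)^{-1}Pi
assert Pi_t == [[P[i][j] - R[i][j] for j in range(2)] for i in range(2)]
```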

III.2 Minimal factorization (2)

In this section the results presented in Sections I.1 and I.2 are proved.

Let T be a (SK)-integral operator on L²([a,b],Y). A boundary value system θ is called a realization of T if the input/output map of θ is equal to T. A realization θ of T is said to be a minimal realization of T if among all realizations of T the state space dimension of θ is as small as possible. Of course, minimal realizations are irreducible systems; the converse is not true (see [10]). It may happen that a (SK)-integral operator has various non-similar minimal realizations (see [10], §4).

Let us describe the connections between (minimal) realizations and (minimal) representations (as defined in §I.1). Given a representation Δ = (B(·),C(·);P) of T we define θ(Δ) to be the system

(2.1)    θ(Δ) = (0,B(t),C(t);I - P,P)ₐᵇ.

Obviously, θ(Δ) is a realization of T, the state space of θ(Δ) is the internal space of Δ and the canonical boundary value operator of θ(Δ) is equal to the internal operator of Δ. Conversely, given a realization θ = (A(t),B(t),C(t);N₁,N₂)ₐᵇ of T we set

(2.2)    Δ(θ) = (U(·)⁻¹B(·),C(·)U(·);P),

where U(t) is the fundamental operator of θ and P is the canonical boundary value operator of θ. The triple Δ(θ) is a representation of T. Note that

(2.3)

In the first part of (2.3) we may replace ∼ by = whenever the main coefficient of θ is equal to zero (almost everywhere on [a,b]).

From (2.3) it follows that θ is a minimal realization of T if and only if Δ(θ) is a minimal representation of T, and also Δ is a minimal representation of T if and only if θ(Δ) is a minimal realization of T. In particular, we see that the degree δ(T) of T is equal to the dimension of the state space of a minimal realization of T.

PROOF of Lemma 1.1.1. For v = 1,2 let θ_v be a minimal realization of T_v. Let X_v denote the state space of θ_v. Thus δ(T_v) = dim X_v (v = 1,2). The input/output map of θ₁θ₂ is equal to T₁T₂ ([5], Theorem II.1.1). Thus T₁T₂ is a (SK)-integral operator. Since X₁ ⊕ X₂ is the state space of θ₁θ₂, we have

    δ(T₁T₂) ≤ dim X₁ + dim X₂ = δ(T₁) + δ(T₂),

which proves the lemma. □

We shall use the following lemma in the proof of Theorem I.2.1.

2.1. LEMMA. Let Π be a decomposing projection of the boundary value system θ, and let T, T₁ and T₂ be the input/output maps of θ, ℓ_Π(θ) and r_Π(θ), respectively. Then θ is a minimal realization of T if and only if ℓ_Π(θ) and r_Π(θ) are minimal realizations of T₁ and T₂, respectively, and T = T₁T₂ is a minimal factorization.

PROOF. Assume θ is a minimal realization of T. Formula (II.4.3) implies T = T₁T₂. Since Ker Π is the state space of ℓ_Π(θ), we have δ(T₁) ≤ dim Ker Π. Similarly, δ(T₂) ≤ dim Im Π. Thus (cf. (I.1.5))

    δ(T) ≤ δ(T₁) + δ(T₂) ≤ dim Ker Π + dim Im Π = δ(T).

Here the last equality is a consequence of the fact that θ is a minimal realization of T. It follows that (I.2.1) holds, δ(T₁) = dim Ker Π and δ(T₂) = dim Im Π. Thus T = T₁T₂ is a minimal factorization and ℓ_Π(θ) and r_Π(θ) are minimal realizations of T₁ and T₂, respectively.

To prove the converse, assume that ℓ_Π(θ) and r_Π(θ) are minimal realizations of T₁ and T₂, respectively, and let T = T₁T₂ be a minimal factorization. Then

    δ(T) = δ(T₁) + δ(T₂) = dim Ker Π + dim Im Π.

So δ(T) is equal to the state space dimension of θ, which implies that θ is a minimal realization of T. □

PROOF of Theorem I.2.1. Let Δ = (B(·),C(·);P) be a minimal representation of T, and let Ω(t) be the fundamental operator of Δ. Assume Π is a decomposing projection for Δ. Consider the system θ(Δ) defined by (2.1). The fundamental operator U(t) of θ(Δ) is identically equal to I and the fundamental operator U^×(t) of the inverse system θ(Δ)^× is precisely Ω(t). It follows that Π is a decomposing projection for θ(Δ). Let T₁ and T₂ be the input/output maps corresponding to ℓ_Π(θ(Δ)) and r_Π(θ(Δ)), respectively. Since θ(Δ) is a minimal realization of T, Lemma 2.1 implies that T = T₁T₂ is a minimal factorization. From Lemma 1.1 it follows that the kernels of T₁ and T₂ are given by the formulas (I.2.3) and (I.2.4), respectively. This proves the first part of the theorem.

To prove the second part, let T = T₁T₂ be a minimal factorization. We want to construct a minimal representation of T that generates the factorization T = T₁T₂. For v = 1,2 let θ_v be a minimal realization of T_v. Let X_v be the state space of θ_v (v = 1,2). Put θ = θ₁θ₂. Obviously, the projection

    Π = [ 0  0 ] : X₁ ⊕ X₂ → X₁ ⊕ X₂
        [ 0  I ]

is a factorization projection (see §II.6) for θ = θ₁θ₂ and pr_{I-Π}(θ) = θ₁, pr_Π(θ) = θ₂. Next, apply Proposition II.6.2. We conclude that Π is a decomposing projection for θ and

(2.4)

It follows that ℓ_Π(θ) is a minimal realization of T₁ and r_Π(θ) is a minimal realization of T₂. So, by Lemma 2.1, the system θ is a minimal realization of T. Now, put Δ₀ = Δ(θ) (see formula (2.2)). Then Δ₀ is a minimal representation of T. The fundamental operator of Δ₀ is U(t)⁻¹U^×(t), where U(t) and U^×(t) are the fundamental operators of θ and θ^×, respectively (cf. formula (II.3.5)). It follows that Π is a decomposing projection for Δ₀. Using Lemma 1.1 one sees that T = T₁T₂ is the corresponding factorization. So Δ₀ is a minimal representation of T which generates the factorization T = T₁T₂. □


Let T₀, T₁, T₂, T₃ and T₄ be the integral operators on L²[0,1] appearing in formula (I.2.6). We shall now prove that T₀ does not have a minimal representation that generates the minimal factorizations

(2.5)    T₀ = T₁T₂,    T₀ = T₃T₄.

To do this consider the following two systems:

, (1 1) ; ° ° ]11 ° 1 /0'

The canonical boundary value operators of θ and θ* are given by

    P_θ = [ 0  0 ]       P_θ* = [ 1  -1 ]
          [ 0  1 ] ,            [ 0   3 ] .

Note that P_θ and P_θ* are not similar. In fact, P_θ is a projection and P_θ* is not. It follows that θ and θ* are not similar.

From what we proved in §I.2 it is clear that θ and θ* are minimal realizations of T₀. Furthermore, the projection

    Π = [ 0  0 ]
        [ 0  1 ] ,

which acts on ℂ², is a decomposing projection for both θ and θ*. The left and right factors of θ and θ* corresponding to Π are given by:

    ℓ_Π(θ) = (0,(1 + t)⁻¹,1;1,0)₀¹,

    r_Π(θ) = (0,-1,(1 + t)⁻¹;0,1)₀¹,

    ℓ_Π(θ*) = (0,(1 + t)⁻¹,1;0,1)₀¹,

    r_Π(θ*) = (0,-1,(1 + t)⁻¹;-2,3)₀¹.

Note that the input/output maps of ℓ_Π(θ), r_Π(θ), ℓ_Π(θ*) and r_Π(θ*) are the operators T₁, T₂, T₃ and T₄, respectively.

For j = 1,2,3 and 4 the operator T_j has the property that all its minimal realizations are similar. We shall give the proof of this statement for the operator T₄ (for the other operators one can use analogous arguments). Let θ₀ be a minimal realization of T₄. We want to show that θ₀ ∼ r_Π(θ*). Without loss of generality we may assume that θ₀ = (0,b(t),c(t);1 - p,p)₀¹. Since T_θ₀ = T₄ we have

(2.6)    c(t)(1 - p)b(s) = 2(1 + t)⁻¹,    0 ≤ s < t ≤ 1,

(2.7)    -c(t)pb(s) = 3(1 + t)⁻¹,    0 ≤ t < s ≤ 1.

We can apply Theorems 3.2 and 3.3 in [8] to show that the kernel

    h(t,s) = { 2(1 + t)⁻¹,    0 ≤ s < t ≤ 1,
             { 3(1 + t)⁻¹,    0 ≤ t < s ≤ 1,

is both lower unique and upper unique. It follows that

    c(t)(1 - p)b(s) = 2(1 + t)⁻¹,    c(t)pb(s) = -3(1 + t)⁻¹

almost everywhere on the full square [0,1] × [0,1]. In particular, 1 - p ≠ 0 and

    -3(1 + t)⁻¹ = c(t)pb(s) = (p/(1 - p))·c(t)(1 - p)b(s) = (2p/(1 - p))(1 + t)⁻¹.

Hence p = 3 and c(t)b(s) = -(1 + t)⁻¹ on [0,1] × [0,1]. The latter identity implies (see [8], §1) that there exists a non-zero constant γ such that b(t) = γ and c(t) = -γ⁻¹(1 + t)⁻¹ a.e. on 0 ≤ t ≤ 1. Thus θ₀ ∼ r_Π(θ*).
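The elimination of p and the resulting form of b and c can be checked with exact arithmetic (the sample values of γ and t below are assumptions; any non-zero γ and any t in [0,1] work):

```python
from fractions import Fraction as F

# From -3(1+t)^{-1} = (2p/(1-p))(1+t)^{-1} one gets 2p/(1-p) = -3, hence p = 3.
p = F(3)
assert 2 * p / (1 - p) == -3

# With p = 3, b(t) = gamma and c(t) = -gamma^{-1}(1+t)^{-1}, both kernel
# identities (2.6) and (2.7) hold at any sample point:
gamma, t = F(7), F(1, 2)
b = gamma
c = -(1 / gamma) / (1 + t)
assert c * (1 - p) * b == 2 / (1 + t)   # (2.6)
assert -c * p * b == 3 / (1 + t)        # (2.7)
assert c * b == -1 / (1 + t)
```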

Now, let Δ be a minimal representation of T₀ and assume that Π₁ and Π₂ are decomposing projections for Δ which yield the minimal factorizations in (2.5). Put Σ = θ(Δ) (cf. (2.1)). Then (cf. the first paragraph of the proof of Theorem I.2.1) the system Σ is a minimal realization of T₀, the projections Π₁ and Π₂ are decomposing projections for Σ and

Since Σ is a minimal realization of T₀, we can use Lemma 2.1 to show that ℓ_Π₁(Σ) and r_Π₁(Σ) are minimal realizations of T₁ and T₂, respectively. But then the result of the preceding paragraph implies that ℓ_Π₁(Σ) ∼ ℓ_Π(θ) and r_Π₁(Σ) ∼ r_Π(θ). In an analogous way one proves that ℓ_Π₂(Σ) ∼ ℓ_Π(θ*) and r_Π₂(Σ) ∼ r_Π(θ*). Next, apply Theorem II.4.1 and [5], Proposition II.1.3:

    θ ∼ ℓ_Π(θ)r_Π(θ) ∼ ℓ_Π₁(Σ)r_Π₁(Σ) ∼ Σ;

    θ* ∼ ℓ_Π(θ*)r_Π(θ*) ∼ ℓ_Π₂(Σ)r_Π₂(Σ) ∼ Σ.

So θ ∼ θ*, which is a contradiction. Hence T₀ does not have a single minimal representation that yields both minimal factorizations in (2.5).

III.3 SB-minimal factorization (2)

In this section we prove the following theorem, which is the analogue of Theorem I.2.1 for SB-minimal factorization (cf. §I.5).

3.1. THEOREM. Let T be a (SK)-integral operator on L²([a,b],Y), and let Δ = (B(·),C(·);P) be a SB-minimal representation of T. If Π is a decomposing projection for Δ, then a SB-minimal factorization T = T₁T₂ is obtained by taking T₁ and T₂ to be the (SK)-integral operators on L²([a,b],Y) of which the kernels are given by

(3.1)    k₁(t,s) = { C(t)(I - P)(I - Π(s))B(s),    a ≤ s < t ≤ b,
                  { -C(t)P(I - Π(s))B(s),          a ≤ t < s ≤ b,

(3.2)    k₂(t,s) = { C(t)Π(t)(I - P)B(s),    a ≤ s < t ≤ b,
                  { -C(t)Π(t)PB(s),          a ≤ t < s ≤ b,

respectively. Here

    Π(t) = Ω(t)Π(I - Π + Ω(t)Π)⁻¹Π,    a ≤ t ≤ b,

with Ω(t) being the fundamental operator of Δ.

Furthermore, if Δ runs over all possible SB-minimal representations of T, then all SB-minimal factorizations of T are obtained in this way.

We prove Theorem 3.1 by reducing the theorem to statements about systems with separable boundary conditions. Recall (see [8], §7) that a boundary value system θ is said to have separable boundary conditions if and only if its canonical boundary value operator P_θ is a projection. In that case θ is called a SB-system. We shall need the following lemma.

3.2. LEMMA. Let θ ∼ θ₁θ₂ be a cascade decomposition. Then θ is a SB-system if and only if the compounding systems θ₁ and θ₂ are SB-systems.

PROOF. Since the class of SB-systems is preserved under similarity we may assume that θ = θ₁θ₂. But then we can use the proof of Theorem II.1.1 in [5] to show that the canonical boundary value operator P of θ has the following form ([5], formula (II.1.5)):

(3.4)    P = [ P₁  P₁Q(I - P₂) ]
             [ 0   P₂          ] .

Here P₁ and P₂ are the canonical boundary value operators of θ₁ and θ₂, respectively, and Q is an operator which we shall not specify. From (3.4) it is clear that P is a projection if and only if P₁ and P₂ are projections, which proves the lemma. □
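With scalar stand-ins for P₁, P₂ and Q, the equivalence read off from (3.4) can be verified directly (the value of Q is an arbitrary assumption and drops out of the conclusion):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cascade_P(P1, Q, P2):
    # The operator matrix of formula (3.4), with scalar entries.
    return [[P1, P1 * Q * (1 - P2)], [F(0), P2]]

# If P1 and P2 are projections (as scalars: 0 or 1), then P is a projection...
for P1 in (F(0), F(1)):
    for P2 in (F(0), F(1)):
        P = cascade_P(P1, F(5), P2)
        assert matmul(P, P) == P

# ...while a non-idempotent P2 destroys idempotency of P.
P = cascade_P(F(1), F(5), F(2))
assert matmul(P, P) != P
```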

Let T be a (SK)-integral operator on L²([a,b],Y). A system θ is called a SB-realization of T if θ is a SB-system and T is equal to the input/output map of θ. We say that θ is a SB-minimal realization if among all SB-realizations of T the state space dimension of θ is as small as possible. Since the class of SB-systems is closed under similarity and reduction, a SB-minimal realization is automatically irreducible. (Note that SB-minimality as defined in [8], §7 does not affect the external coefficient, and hence a SB-realization of a (SK)-integral operator is SB-minimal in the sense of [8] if and only if it is SB-minimal in the sense defined here.)

If Δ is a SB-representation of T (see §I.5), then the system θ(Δ) (defined by (2.1)) is a SB-realization of T. Conversely, if θ is a SB-realization of T, then Δ(θ) (see (2.2)) is a SB-representation of T. It follows that Δ is a SB-minimal representation of T if and only if θ(Δ) is a SB-minimal realization of T, and, similarly, θ is a SB-minimal realization of T if and only if Δ(θ) is a SB-minimal representation of T. In particular, the SB-degree ε(T) of T is equal to the dimension of the state space of a SB-minimal realization of T. The next lemma concerns formula (I.5.2).

3.3. LEMMA. If T = T₁T₂ is the product of two (SK)-integral operators on L²([a,b],Y), then

(3.5)    ε(T) ≤ ε(T₁) + ε(T₂).

PROOF. For v = 1,2 let θ_v be a SB-minimal realization of T_v. Let X_v be the state space of θ_v (v = 1,2). Then X₁ ⊕ X₂ is the state space of θ₁θ₂ and ε(T_v) = dim X_v (v = 1,2). From Lemma 3.2 and [5], Theorem II.1.1 it follows that θ₁θ₂ is a SB-realization of T. Thus

    ε(T) ≤ dim X₁ + dim X₂ = ε(T₁) + ε(T₂). □

The next lemma is the analogue of Lemma 2.1 for SB-systems. Recall that T = T₁T₂ is a SB-minimal factorization if and only if

(3.6)    ε(T) = ε(T₁) + ε(T₂).

3.4. LEMMA. Let Π be a decomposing projection for the SB-system θ, and let T, T₁ and T₂ be the input/output maps of θ, ℓ_Π(θ) and r_Π(θ), respectively. Then θ is a SB-minimal realization of T if and only if ℓ_Π(θ) and r_Π(θ) are SB-minimal realizations of T₁ and T₂, respectively, and T = T₁T₂ is a SB-minimal factorization.

PROOF. Assume θ is a SB-minimal realization of T. Formula (II.4.3) implies that T = T₁T₂. From Lemma 3.2 we may conclude that ℓ_Π(θ) and r_Π(θ) are SB-systems. Thus ε(T₁) ≤ dim Ker Π and ε(T₂) ≤ dim Im Π. According to (3.5) this implies that

    ε(T) ≤ ε(T₁) + ε(T₂) ≤ dim Ker Π + dim Im Π = dim X,

where X is the state space of θ. Since θ is a SB-realization of T, we have ε(T) = dim X. Hence (3.6) holds, ε(T₁) = dim Ker Π and ε(T₂) = dim Im Π, which proves the "only if part" of the lemma.

To prove the "if part", assume that ℓ_Π(θ) and r_Π(θ) are SB-minimal realizations of T₁ and T₂, respectively, and let T = T₁T₂ be a SB-minimal factorization. Then

    ε(T) = ε(T₁) + ε(T₂) = dim Ker Π + dim Im Π.

So ε(T) is equal to the state space dimension of θ. Hence θ is a SB-minimal realization of T. □

PROOF of Theorem 3.1. Let Δ = (B(·),C(·);P) be a SB-minimal representation of T, and let Π be a decomposing projection for Δ. Then the system θ(Δ) (defined by (2.1)) is a SB-minimal realization of T and Π is a decomposing projection for θ(Δ). Let T₁ and T₂ be the input/output maps corresponding to ℓ_Π(θ(Δ)) and r_Π(θ(Δ)), respectively. Lemma 3.4 implies that T = T₁T₂ is a SB-minimal factorization. From Lemma 1.1 it follows that the kernels of T₁ and T₂ are given by the formulas (3.1) and (3.2), respectively. This proves the first part of the theorem.

To prove the second part, let T = T₁T₂ be a SB-minimal factorization. For v = 1,2 let θ_v be a SB-minimal realization of T_v. Put θ = θ₁θ₂. Then θ is a SB-system (by Lemma 3.2). Now proceed as in the proof of Theorem I.2.1 (see §III.2). So there exists a decomposing projection Π for θ such that (2.4) holds true. It follows that ℓ_Π(θ) and r_Π(θ) are SB-minimal realizations of T₁ and T₂, respectively. Thus θ is a SB-minimal realization of T (by Lemma 3.4). Now, put Δ₀ = Δ(θ) (see formula (2.2)). Then Δ₀ is a SB-minimal representation of T, the projection Π is a decomposing projection for Δ₀ and using Lemma 1.1 one sees that T = T₁T₂ is the corresponding factorization. Thus Δ₀ is a SB-minimal representation of T which generates the factorization T = T₁T₂. □

III.4 Proof of Theorem I.6.1

To prove Theorem I.6.1 we shall need Lemma 4.1 below. Let k be a semi-separable kernel on [a,b] × [a,b]. Thus k is the kernel of an integral operator on L²([a,b],Y) and k admits a representation of the form (I.1.2). For a < γ < b we let k_γ and k^γ denote the following restrictions:

    k_γ(t,s) = k(t,s),    γ ≤ t ≤ b,  a ≤ s ≤ γ,

    k^γ(t,s) = k(t,s),    a ≤ t ≤ γ,  γ ≤ s ≤ b.

Since k is semi-separable, the kernels k_γ and k^γ are finite rank kernels. Given a finite rank kernel h we denote by rank(h) the rank of the corresponding integral operator (see [8], §1).

4.1. LEMMA. Let k1 and k2 be semi-separable kernels on [a,b] × [a,b]. Put

(4.1) k(t,s) := k1(t,s) + k2(t,s) + ∫_a^b k1(t,α)k2(α,s)dα.

Then k is again a semi-separable kernel and for a < y < b

(4.2) rank(k_y) ≤ rank((k1)_y) + rank((k2)_y),

(4.3) rank(k^y) ≤ rank(k1^y) + rank(k2^y).

PROOF. The semi-separability of k follows from [5], Theorem II.1.1.

We prove (4.2). For v = 1,2 let

k_v(t,s) = F1^{(v)}(t)G1^{(v)}(s), a ≤ s < t ≤ b,

k_v(t,s) = F2^{(v)}(t)G2^{(v)}(s), a ≤ t < s ≤ b.

Here

F1^{(v)}(t) : X1^{(v)} → Y, G1^{(v)}(s) : Y → X1^{(v)},

F2^{(v)}(t) : X2^{(v)} → Y, G2^{(v)}(s) : Y → X2^{(v)}

are linear operators acting between finite dimensional linear spaces, which as functions of t are square integrable on a ≤ t ≤ b. Take a ≤ s < y < t ≤ b. Then

k(t,s) = F1^{(1)}(t)[G1^{(1)}(s) + ∫_a^y G1^{(1)}(α)k2(α,s)dα] + [F1^{(2)}(t) + ∫_y^b k1(t,α)F1^{(2)}(α)dα]G1^{(2)}(s).

Put

H(s) = G1^{(1)}(s) + ∫_a^y G1^{(1)}(α)k2(α,s)dα : Y → X1^{(1)},

K(t) = F1^{(2)}(t) + ∫_y^b k1(t,α)F1^{(2)}(α)dα : X1^{(2)} → Y.

Then

(4.4) k(t,s) = F1^{(1)}(t)H(s) + K(t)G1^{(2)}(s), a ≤ s < y < t ≤ b.

Introduce the following auxiliary operators:

Γ_v : L2([a,y],Y) → X1^{(v)}, Γ_v φ = ∫_a^y G1^{(v)}(s)φ(s)ds;

Λ_v : X1^{(v)} → L2([y,b],Y), (Λ_v x)(t) = F1^{(v)}(t)x;

Γ : L2([a,y],Y) → X1^{(1)}, Γφ = ∫_a^y H(s)φ(s)ds;

Λ : X1^{(2)} → L2([y,b],Y), (Λx)(t) = K(t)x.

From (4.4) we see that the integral operator with kernel k_y is equal to Λ1Γ + ΛΓ2, and hence

(4.5) rank(k_y) ≤ rank(Λ1Γ) + rank(ΛΓ2).

We have (cf. (4.4)):

(4.6) rank((k_v)_y) = rank(Λ_vΓ_v), v = 1,2.

We shall prove that

(4.7) Im Γ ⊂ Im Γ1, Ker Λ2 ⊂ Ker Λ.

Let Π be a projection of X1^{(1)} along Im Γ1. So ΠΓ1 = 0. This implies that ΠG1^{(1)}(s) = 0 a.e. on a ≤ s ≤ y. It follows that ΠH(s) = 0 a.e. on a ≤ s ≤ y, and hence ΠΓ = 0. We see that Im Γ ⊂ Ker Π = Im Γ1. Next, take x ∈ Ker Λ2. So F1^{(2)}(t)x = 0 a.e. on y ≤ t ≤ b. It follows that K(t)x = 0 a.e. on y ≤ t ≤ b. Thus Λx = 0, and hence the second inclusion in (4.7) holds true.

Now, use (4.7) in (4.5). We get

rank(k_y) ≤ rank(Λ1Γ) + rank(ΛΓ2) ≤ rank(Λ1Γ1) + rank(Λ2Γ2) = rank((k1)_y) + rank((k2)_y).

This proves (4.2). Formula (4.3) is proved in an analogous way. □
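Inequality (4.2) can be checked numerically on a discretization of (4.1). The sketch below is illustrative only: the grid, the rank-one kernels sin(t)cos(s) and e^{-t}·s, and the cut point y = 1/2 are ad hoc choices, not taken from the text.

```python
import numpy as np

n = 200
h = 1.0 / n                      # mesh width on [0,1]
t = np.linspace(0.0, 1.0, n)

# Two rank-one (hence semi-separable) kernels sampled on the grid:
# k1(t,s) = sin(t)cos(s), k2(t,s) = exp(-t)*s.
K1 = np.outer(np.sin(t), np.cos(t))
K2 = np.outer(np.exp(-t), t)

# Discretization of (4.1): k = k1 + k2 + integral of k1(t,a)k2(a,s) da.
K = K1 + K2 + h * (K1 @ K2)

# Lower-left restrictions k_y (rows y <= t <= b, columns a <= s <= y)
# for the cut point y = 1/2.
m = n // 2
ranks = [np.linalg.matrix_rank(M[m:, :m]) for M in (K, K1, K2)]

# Inequality (4.2): rank(k_y) <= rank((k1)_y) + rank((k2)_y).
assert ranks[0] <= ranks[1] + ranks[2]
```

The upper-right restrictions give inequality (4.3) in exactly the same way.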

Let k be the semi-separable kernel of the (SK)-integral operator T. In what follows l(k) and u(k) denote the lower and upper order of k as defined in [8], §4. If Δ = (B(·),C(·);P) is a SB-minimal representation of T, then (cf. the proof of Theorem 7.1 in [8])

(4.8) l(k) = dim Ker P, u(k) = dim Im P.

It follows that the SB-degree ε(T) of T is given by

(4.9) ε(T) = l(k) + u(k),

which is just another form of formula (I.5.1).
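A discrete analogue of (4.8)–(4.9): on a grid, the lower order l(k) and upper order u(k) appear as the ranks of the off-diagonal restrictions of the sampled kernel, and their sum plays the role of the SB-degree. The kernels and the grid below are hypothetical choices, used only for illustration.

```python
import numpy as np

n = 100
t = np.linspace(0.0, 1.0, n)

# A semi-separable kernel: rank 2 below the diagonal (so l(k) = 2)
# and rank 1 above it (so u(k) = 1).
lower = np.outer(t, np.ones(n)) + np.outer(np.ones(n), t)   # t + s, rank 2
upper = np.outer(np.exp(t), np.exp(-t))                     # e^{t-s}, rank 1
K = np.tril(lower, -1) + np.triu(upper, 1)

# The orders are read off from the off-diagonal restrictions k_y, k^y:
m = n // 2
l_k = np.linalg.matrix_rank(K[m:, :m])   # lower-left block -> l(k)
u_k = np.linalg.matrix_rank(K[:m, m:])   # upper-right block -> u(k)

# Discrete counterpart of (4.9): the SB-degree is l(k) + u(k).
assert (l_k, u_k) == (2, 1)
```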

PROOF of Theorem I.6.1. Part (i) of the theorem is contained in Theorem 3.1. So we have to prove part (ii). Let T be in the class (USB), and let T = T1T2 be a SB-minimal factorization. First we prove that T1 and T2 are in the class (USB). Let k, k1 and k2 be the kernels of T, T1 and T2, respectively. Then k, k1 and k2 are related as in (4.1). We have

(4.10) l(k) = l(k1) + l(k2), u(k) = u(k1) + u(k2).

Indeed, from (4.8) and (3.4) it follows that l(k) ≤ l(k1) + l(k2) and u(k) ≤ u(k1) + u(k2). Since T = T1T2 is a SB-minimal factorization we know that ε(T) = ε(T1) + ε(T2). Now use (4.9) and the two identities in (4.10) are clear. The fact that T belongs to the class (USB) is equivalent to the statement that k is lower unique and upper unique (see [8]), and hence it follows (see [12]) that

rank(k_y) = l(k), rank(k^y) = u(k), a < y < b.

Fix a < y < b, and apply Lemma 4.1. We get

l(k) = rank(k_y) ≤ rank((k1)_y) + rank((k2)_y) ≤ l(k1) + l(k2) = l(k);

u(k) = rank(k^y) ≤ rank(k1^y) + rank(k2^y) ≤ u(k1) + u(k2) = u(k).

It follows that for v = 1,2

rank((kv)_y) = l(kv), a < y < b,

rank(kv^y) = u(kv), a < y < b.

But then we can apply [8], Theorems 3.2 and 3.3 to show that kv is lower unique and upper unique. So T1 and T2 are in the class (USB).

Next, let Δ be a SB-minimal representation of T. We have to show that there exists a unique decomposing projection Π for Δ that yields the factorization T = T1T2. Put Θ0 = Θ(Δ) (see (2.1)). Then Θ0 is a SB-minimal realization of T. It suffices to show that Θ0 has a unique decomposing projection Π such that T1 is the input/output map of ℓ_Π(Θ0) and T2 is the input/output map of r_Π(Θ0). We already know (see the proof of the second part of Theorem 3.1) that there exists a SB-minimal realization Θ' of T and a decomposing projection Π' for Θ' such that T1 is the input/output map of ℓ_Π'(Θ') and T2 is the input/output map of r_Π'(Θ'). Since T is in the class (USB), the SB-minimal representations Δ and Δ(Θ') of T are similar. But then Θ0 = Θ(Δ) and Θ(Δ(Θ')) are similar as well. Thus Θ0 ≈ Θ', and we can apply Proposition II.6.1. So there exists a decomposing projection Π for Θ0 such that

ℓ_Π(Θ0) ≈ ℓ_Π'(Θ'), r_Π(Θ0) ≈ r_Π'(Θ').

It follows that T1 and T2 are the input/output maps of ℓ_Π(Θ0) and r_Π(Θ0), respectively. It remains to prove the uniqueness of Π.

Let Π1 be a second decomposing projection for Θ0 that yields the factorization T = T1T2. Thus T1 is the input/output map of ℓ_Π1(Θ0) and T2 is the input/output map of r_Π1(Θ0). From Lemma 3.4 we know that ℓ_Π(Θ0) and ℓ_Π1(Θ0) are SB-minimal realizations of T1 and r_Π(Θ0) and r_Π1(Θ0) are SB-minimal realizations of T2. Now use that T1 and T2 are in the class (USB). So

(4.11) ℓ_Π(Θ0) ≈ ℓ_Π1(Θ0), r_Π(Θ0) ≈ r_Π1(Θ0).

In other words the cascade decompositions Θ0 ≈ ℓ_Π(Θ0)r_Π(Θ0) and Θ0 ≈ ℓ_Π1(Θ0)r_Π1(Θ0) are equivalent. Now recall that the class of SB-systems is closed under similarity and reduction. So the SB-minimal system Θ0 is irreducible. But then we can apply Theorem II.4.3 to show that Π1 = Π. □

III.5 Minimal factorization of Volterra integral operators (2)

In this section we prove the results announced in §I.3. We begin

with a lemma.

5.1. LEMMA. Let T be a (SK)-integral operator on L2([a,b],Y) in the class (AC), and let Δ = (B(·),C(·);P) be a minimal representation of T. Then B(·) and C(·) are analytic on [a,b] and P = 0. In particular, ε(T) = δ(T).

PROOF. Let k be the kernel of T. Since Δ is a representation of T, we have

(5.1) C(t)(I - P)B(s) = k(t,s), a ≤ s < t ≤ b,

(5.2) C(t)PB(s) = 0, a ≤ t < s ≤ b.

Let X be the internal space of Δ, and put X1 = Im(I - P). Introduce

B1(t) = (I - P)B(t) : Y → X1, C1(t) = C(t)|X1 : X1 → Y.

Then Δ1 = (B1(·),C1(·);0) is a SB-representation of T, and hence ε(T) ≤ dim X1. Since Δ is a minimal representation of T, we have δ(T) = dim X. So

δ(T) = dim X ≥ dim X1 ≥ ε(T) ≥ δ(T).

We conclude that ε(T) = δ(T) and X1 = X. In particular, I - P is invertible and Δ' = ((I - P)B(·),C(·);0) is a SB-minimal representation of T.

According to our hypotheses the kernel k admits a representation of the form:

k(t,s) = F(t)G(s), a ≤ s ≤ t ≤ b,

with F(·) and G(·) analytic on [a,b]. Without loss of generality (cf. [8], §1) we may assume that the operators

(5.3) ∫_a^b F(t)*F(t)dt, ∫_a^b G(t)G(t)*dt

are invertible. It follows that Δ0 = (G(·),F(·);0) is a SB-minimal representation of T. So Δ' and Δ0 are two SB-minimal representations of T. But T ∈ (AC) ⊂ (USB), and thus Δ0 and Δ' are similar. So there exists an invertible operator S such that (I - P)B(t) = SG(t) and C(t) = F(t)S^{-1} a.e. on a ≤ t ≤ b. According to (5.2) this implies that

(5.4) F(t)S^{-1}P(I - P)^{-1}SG(s) = 0

for a ≤ t < s ≤ b. But F(·) and G(·) are analytic on [a,b]. Hence (5.4) holds for all (t,s) in [a,b] × [a,b]. Now use that the operators (5.3) are invertible. We obtain S^{-1}P(I - P)^{-1}S = 0. Thus P = 0. Furthermore, B(·) = SG(·) and C(·) = F(·)S^{-1} are analytic on [a,b]. □

Note that in Lemma 5.1 and its proof we used the convention that a function in L2[a,b] is analytic on [a,b] if one of the elements in its equivalence class is analytic.

In Lemma 5.1 the analyticity condition on the kernel k of T plays an

essential role. Without this analyticity condition it may happen that a

Volterra (SK)-integral operator T has a minimal representation with a non-zero

internal operator; in fact, this may happen even if the kernel k of T is lower

unique. For example, take

0 ≤ s ≤ t ≤ 1.


Then k0 is lower unique (because of [8], Theorem 3.2) and Δ0 is a minimal representation of the corresponding integral operator T0. Note that the internal operator of Δ0 is non-zero. In terms of realizations this example shows that a Volterra (SK)-integral operator with a lower unique kernel may have a non-causal minimal realization.

PROOF of Theorem I.3.1. Part (i) of the theorem is a straightforward application of the first part of Theorem I.2.1 (where one only has to take P = 0). Because of Lemma 5.1 the functions B(·) and C(·) in the minimal representation Δ are analytic on [a,b]. It follows that the fundamental operator Ω(·) of Δ is analytic on [a,b]. Hence the integral operators T1 and T2 given by (I.3.6) and (I.3.7) are in the class (AC).

To prove part (ii), let T be a (SK)-integral operator in the class (AC), and let T = T1T2 be a minimal factorization of T. First we prove that T1 and T2 are in the class (AC). From the second part of Theorem I.2.1 we know that there exist a minimal representation Δ0 = (B0(·),C0(·);P0) of T and a decomposing projection Π0 for Δ0 which yields the factorization T = T1T2. According to Lemma 5.1 the internal operator P0 = 0, and hence we can apply part (i) to Δ0 and Π0. So the remark made at the end of the previous paragraph implies that T1 and T2 are in the class (AC).

Since T = T1T2 is a minimal factorization with factors in the class (AC), we have (cf. Lemma 5.1)

(5.5) ε(T) = δ(T) = δ(T1) + δ(T2) = ε(T1) + ε(T2).

Hence T = T1T2 is also a SB-minimal factorization. Next, observe that T ∈ (AC) ⊂ (USB). So we can apply part (ii) of Theorem I.6.1 (which is proved in the previous section) to finish the proof. □

5.2. COROLLARY. Let T be in the class (AC). Then T = T1T2 is a minimal factorization if and only if T = T1T2 is a SB-minimal factorization.

PROOF. The proof of the "only if part" is given by (5.5). To prove the "if part", let T = T1T2 be a SB-minimal factorization. Then there exist (see the second part of Theorem III.3.1) a SB-minimal representation Δ of T and a decomposing projection Π for Δ which yields the factorization T = T1T2. By Lemma 5.1 the representation Δ is a minimal representation of T. So Theorem I.2.1 (i) implies that T = T1T2 is a minimal factorization. □

Let T be the integral operator defined by formula (I.3.9), and for v = 1,2 put

(T1^{(v)}φ)(t) = φ(t) + ∫_0^t (1 + r_v(s)g_v(s))φ(s)ds,

(T2^{(v)}φ)(t) = φ(t) + ∫_0^t (χ_{[0,·)}(t) - r_v(t))g_v(s)φ(s)ds,

where the functions r_v(·) and g_v(·) are as in §I.3.

(5.6) T = T1^{(1)}T2^{(1)} = T1^{(2)}T2^{(2)},

and these two factorizations are minimal factorizations. We shall now prove that T does not have a minimal representation that generates the minimal factorizations in (5.6). For v = 1,2 let Θv be the causal system Θ(Δv), where Δv is given by (I.3.12). It follows that Θ1 and Θ2 are minimal realizations of the operator T (given by (I.3.9)). We know that for g = g1 the function r1(·) is a solution of the RDE (I.3.11) and for g = g2 the function r2(·) is a solution of (I.3.11). This implies (cf. Lemma II.3.1) that

the projection

Π : ℂ² → ℂ²

is a decomposing projection for both Θ1 and Θ2. The corresponding left and right factors are given by:

ℓ_Π(Θ1) = (0, 1 + r1(t)g1(t), 1; 1, 0)_0^1,

r_Π(Θ1) = (0, g1(t), -r1(t) + χ_{[0,·)}(t); 1, 0)_0^1,

ℓ_Π(Θ2) = (0, 1 + r2(t)g2(t), 1; 1, 0)_0^1,

r_Π(Θ2) = (0, g2(t), -r2(t) + χ_{[0,·)}(t); 1, 0)_0^1.

For v = 1,2 the operator T1^{(v)} is the input/output map of ℓ_Π(Θv) and T2^{(v)} is the input/output map of r_Π(Θv).

For v = 1,2 and j = 1,2 the integral operator Tj^{(v)} has no non-similar minimal realizations. We shall prove this for T1^{(1)}. Let Θ0 be an arbitrary minimal realization of T1^{(1)}. We want to show that Θ0 ≈ ℓ_Π(Θ1). Without loss of generality (apply a similarity if necessary) we may assume that Θ0 = (0, b(t), c(t); 1-p, p)_0^1. Since T_{Θ0} = T1^{(1)} we have

c(t)(1 - p)b(s) = 1 + r1(s)g1(s), 0 ≤ s ≤ t ≤ 1, a.e.,

(5.8) c(t)pb(s) = 0, 0 ≤ t < s ≤ 1, a.e..

Now use that the kernel k1(t,s) = 1 + r1(s)g1(s), 0 ≤ t ≤ 1, 0 ≤ s ≤ 1, is lower unique. So there exists a non-zero constant γ such that

(5.9) c(t)(1 - p) = γ, 1 + r1(t)g1(t) = γb(t), 0 ≤ t ≤ 1, a.e..

Assume p ≠ 0. Then (5.9) and (5.8) imply that

1 + r1(s)g1(s) = p^{-1}(1 - p)c(t)pb(s) = 0, 0 ≤ t < s ≤ 1, a.e.,

which is impossible. So p = 0. But then (5.9) shows that Θ0 ≈ ℓ_Π(Θ1).

Now, let Θ be a minimal realization of T, and assume that there exist decomposing projections Π1 and Π2 for Θ such that for v = 1,2 the operator T1^{(v)} is the input/output map of ℓ_{Πv}(Θ) and T2^{(v)} is the input/output map of r_{Πv}(Θ). Since Θ is a minimal realization of T, we can apply Lemma 2.1 to show that ℓ_{Πv}(Θ) and r_{Πv}(Θ) are minimal realizations of T1^{(v)} and T2^{(v)}, respectively (v = 1,2). But then the result of the preceding paragraph shows that

ℓ_Π(Θ1) ≈ ℓ_{Π1}(Θ), r_Π(Θ1) ≈ r_{Π1}(Θ),

ℓ_Π(Θ2) ≈ ℓ_{Π2}(Θ), r_Π(Θ2) ≈ r_{Π2}(Θ),

and thus Θv ≈ ℓ_Π(Θv)r_Π(Θv) ≈ ℓ_{Πv}(Θ)r_{Πv}(Θ) ≈ Θ for v = 1,2. In particular, Θ1 ≈ Θ2. Contradiction. Thus there is no single minimal realization Θ of T which has decomposing projections that yield the minimal factorizations in (5.6), and hence T does not have a minimal representation that generates the minimal factorizations in (5.6).


III.6 Proof of Theorem I.4.1

In this section we shall prove Theorem I.4.1. We begin with two lemmas.

6.1. LEMMA. A minimal representation Δ of a (SK)-integral operator on L2([a,b],Y) in the class (STC) is of the form

(6.1) Δ = (e^{-(t-a)A}B, Ce^{(t-a)A}; 0).

PROOF. Let T belong to the class (STC). Since the class (STC) is contained in the class (AC), the degree of T is equal to the SB-degree of T (Lemma 5.1). From formula (I.4.1) it is clear that T has a representation of the form (6.1). Moreover, without loss of generality we may assume that

(6.2) ∩_{j=1}^{n} Ker CA^{j-1} = (0), X = Im(B AB ⋯ A^{n-1}B).

Here n is the dimension of the internal space X of Δ. The identities in (6.2) imply that the representation (6.1) is a SB-minimal representation. So ε(T) = dim X. But δ(T) = ε(T). It follows that T has a minimal representation of the form (6.1).

Next, assume that Δ in (6.1) and Δ0 = (B(·),C(·);P) are minimal representations of T. Since (STC) ⊂ (AC), Lemma 5.1 implies that P = 0 and both Δ and Δ0 are SB-minimal representations of T. But (STC) ⊂ (AC) ⊂ (USB). It follows that Δ and Δ0 are similar, and so there exists an invertible operator S such that

B(t) = Se^{-(t-a)A}B, C(t) = Ce^{(t-a)A}S^{-1}.

Put A0 = SAS^{-1}, B0 = SB and C0 = CS^{-1}. Then Δ0 = (e^{-(t-a)A0}B0, C0e^{(t-a)A0}; 0) and the lemma is proved. □

6.2. LEMMA. Let T be a (SK)-integral operator on L2([a,b],Y) in the class (STC), and let Δ = (e^{-(t-a)A}B, Ce^{(t-a)A}; 0) be a minimal representation of T. Then Π is a supporting projection for Δ if and only if Π is a decomposing projection for Δ and the corresponding factorization T = T1T2 has factors T1 and T2 in the class (STC).

PROOF. Consider the time invariant causal system Θ = (A,B,C;I,0)_a^b. Note that Δ = Δ(Θ) (cf. formula (2.2)). Let Π be a supporting projection for Δ. Then Π is a factorization projection for Θ, and we can apply Proposition


II.6.2 to show that Π is a decomposing projection for Θ. Formula (II.6.13) implies that the input/output maps of ℓ_Π(Θ) and r_Π(Θ) are input/output maps of time invariant causal systems and hence belong to the class (STC). It follows that the factorization T = T1T2 generated by Π and Δ has its factors in the class (STC). This proves the "only if part" of the lemma.

To prove the "if part", let Π be a decomposing projection for Δ and assume that the corresponding factorization T = T1T2 has its factors in the class (STC). Then Π is a decomposing projection for the system Θ. Furthermore T1 and T2 are the input/output maps of ℓ_Π(Θ) and r_Π(Θ), respectively. For v = 1,2 let Δv = (e^{-(t-a)Av}Bv, Cve^{(t-a)Av}; 0) be a minimal representation of Tv, and consider Θv = (Av,Bv,Cv;I,0)_a^b. Let Xv be the state space of Θv (v = 1,2). Put Θ0 = Θ1Θ2. Note that Θ0 is a time invariant causal system. From Lemma 2.1 it follows that Θ0 is a minimal realization of T. Furthermore, the projection

P = 0 ⊕ I_{X2} : X1 ⊕ X2 → X1 ⊕ X2

is a factorization projection for Θ0, pr_{I-P}(Θ0) = Θ1 and pr_P(Θ0) = Θ2. Now Θ0 and Θ are both time invariant causal minimal realizations of T. It follows

that Θ0 and Θ are similar and a similarity between Θ0 and Θ is time independent (cf. [7], §5). Put Π0 = SPS^{-1}, where S is a similarity between Θ0 and Θ. Then Π0 is a factorization projection for Θ, and hence Π0 is a supporting projection for Δ. According to Propositions II.6.1 and II.6.2 the projection Π0 is also a decomposing projection for Θ and

ℓ_{Π0}(Θ) ≈ ℓ_P(Θ0) ≈ pr_{I-P}(Θ0) = Θ1,

r_{Π0}(Θ) ≈ r_P(Θ0) ≈ pr_P(Θ0) = Θ2.

So Π0 is a decomposing projection for Δ and T = T1T2 is the corresponding factorization. By Theorem I.3.1 (ii) there is only one decomposing projection for Δ which generates the factorization T = T1T2. Thus Π = Π0 and Π is a supporting projection for Δ. □

PROOF of Theorem I.4.1. To prove part (i), let Π be a supporting projection of the minimal representation Δ = (e^{-tA}B, Ce^{tA}; 0). Put Θ = (A,B,C;I,0). Then Θ is a minimal realization of T and Π is a factorization projection for Θ. It is straightforward to check that the input/output maps of pr_{I-Π}(Θ) and pr_Π(Θ) are the integral operators T1 and T2 given by (I.4.6) and (I.4.7), respectively. Thus T = T1T2. Since ℓ_Π(Θ) ≈ pr_{I-Π}(Θ) and r_Π(Θ) ≈ pr_Π(Θ) (Proposition II.6.2), we can apply Lemma 2.1 to show that the factorization T = T1T2 is minimal. Obviously T1 and T2 are in the class (STC).

Replacing Δ by the representation (6.1) one sees that part (ii) of the theorem is an immediate corollary of Theorem I.3.1 (ii) and Lemma 6.2. □

III.7 A remark about minimal factorization and inversion

Let T be a (SK)-integral operator on L2([a,b],Y). If T is invertible, then

(7.1) δ(T) = δ(T^{-1}), ε(T) = ε(T^{-1}).

To prove the first equality in (7.1), let Θ be a minimal realization of T. Thus δ(T) = dim X, where X is the state space of Θ. The fact that T is invertible implies that the inverse system Θ× has well-posed boundary conditions and T^{-1} is the input/output map of Θ× (see [5], Theorem II.2.1). Since Θ and Θ× have the same state space, it follows that δ(T^{-1}) ≤ δ(T). By applying the latter inequality to T^{-1} instead of T, one obtains δ(T) ≤ δ(T^{-1}). This proves the first identity in (7.1); the second is proved in a similar way (one takes Θ to be a SB-minimal realization of T).

7.1. COROLLARY. Let T = T1T2 be a minimal (resp. SB-minimal) factorization. If T1 and T2 are invertible, then T^{-1} = T2^{-1}T1^{-1} is again a minimal (resp. SB-minimal) factorization.

PROOF. Assume T = T1T2 is a minimal factorization. Thus δ(T) = δ(T1) + δ(T2). Now apply (7.1) to T and the factors T1 and T2. We get δ(T^{-1}) = δ(T1^{-1}) + δ(T2^{-1}). Hence T^{-1} = T2^{-1}T1^{-1} is a minimal factorization. The SB-minimal case is proved in the same way. □
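The identities (7.1) can be made plausible in finite dimensions: inverting I plus a lower-triangular matrix whose strictly lower part has rank one does not change the rank of the off-diagonal blocks, which is the quantity behind the degree. The kernel e^{t-s} and the grid are illustrative assumptions only.

```python
import numpy as np

n = 120
t = np.linspace(0.0, 1.0, n)

# T = I + Volterra operator with the rank-one lower kernel e^{t-s}.
A = np.eye(n) + (1.0 / n) * np.tril(np.outer(np.exp(t), np.exp(-t)), -1)
Ainv = np.linalg.inv(A)

# The lower-left restriction (discrete analogue of k_y) has the same
# rank for T and for T^{-1}: inversion preserves the semi-separable
# structure, in line with delta(T) = delta(T^{-1}) in (7.1).
m = n // 2
r = np.linalg.matrix_rank(A[m:, :m])
r_inv = np.linalg.matrix_rank(Ainv[m:, :m])
assert r == r_inv == 1
```

In block terms: A is block lower triangular, so the lower-left block of A^{-1} is -A22^{-1}A21A11^{-1}, which has exactly the rank of A21.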

III.8 LU- and UL-factorizations (2)

In this section we prove the results about LU- and UL-factorization stated in Section I.8.

PROOF of Theorem I.8.1. The equivalence of the statements (1) and (5) is well-known (see [11], Section IV.7). Assume that (5) holds. Put Θ = Θ(Δ) (see formula (2.1)). Then T is the input/output map of the system Θ and the kernel k of T is as in (I.1.3). It follows that T_s is the input/output map of the system

Θ_s = (0, B(t), C(t); I - P, P)_a^s.

Note that Ω(t) is the fundamental operator of the inverse system Θ_s× associated with Θ_s. Since T_s is invertible (by (5)), we can apply the results of §II.4 in [5] to show that det(I - P + PΩ(s)) ≠ 0. Equivalently, det(I - P + Ω(s)P) ≠ 0, and since a < s ≤ b is arbitrary we see that (5) implies (4).

Since Δ is a SB-representation, the operator P is a projection. Thus (I.8.2) is a well-defined Riccati differential equation. The fundamental operator of (I.8.2) is also equal to Ω(t). It follows that the equation (I.8.2) is solvable on a ≤ t ≤ b if and only if (see [17]) the operator

(8.1) PΩ(t)P : Im P → Im P

is invertible for each a ≤ t ≤ b. Furthermore, in that case the solution of (I.8.2) is given by

(8.2) R(t) = -(I - P)Ω(t)P(PΩ(t)P)^{-1} : Im P → Ker P.

Obviously, the operator (8.1) is invertible if and only if det(I - P + Ω(t)P) ≠ 0, and hence the statements (3) and (4) are equivalent. Moreover we see that the solution of (I.8.2) (assuming it exists) may also be written in the form (I.8.6).

From the definition of a decomposing projection it is clear that the projection P is a decomposing projection for Δ if and only if det(I - P + Ω(t)P) ≠ 0 for a ≤ t ≤ b. Hence the statements (2) and (4) are also equivalent.

It remains to prove that (2) implies (1). Let P be a decomposing projection for Δ. Put Θ = Θ(Δ) (see formula (2.1)). Then P is a decomposing projection for Θ and the corresponding left and right factors are given by

ℓ_P(Θ) = (0, (I - P + R(t)P)B(t), C(t)|Ker P; I_{Ker P}, 0)_a^b,

r_P(Θ) = (0, PB(t), C(t)(P - R(t)P)|Im P; 0, I_{Im P})_a^b.

Here R(t) : Im P → Ker P is the solution of the Riccati differential equation (I.8.2), which exists because (2) is equivalent to (4) and (4) is equivalent to (3). Next, observe that the input/output map of ℓ_P(Θ) is the operator I + K_- with K_- given by (I.8.4). Similarly, the input/output map of r_P(Θ) is equal to I + K_+ with K_+ given by (I.8.5). Since P is a decomposing projection, Theorem II.4.1 implies that T is the product of these two input/output maps, and hence T = (I + K_-)(I + K_+), which is the desired LU-factorization. □
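A finite-dimensional counterpart of the LU-factorization T = (I + K_-)(I + K_+): after discretization T becomes a matrix close to the identity, and the classical Doolittle factorization A = LU, with L unit lower triangular, plays the role of the two factors. The kernel sin(t)cos(s), the grid and the quadrature weight below are illustrative assumptions, not data from the text.

```python
import numpy as np

n = 50
t = np.linspace(0.0, 1.0, n)
h = 1.0 / n

# Discretization of T = I + (integral operator with kernel sin(t)cos(s)).
A = np.eye(n) + h * np.outer(np.sin(t), np.cos(t))

# Doolittle factorization A = L U (L unit lower, U upper triangular):
L, U = np.eye(n), np.zeros((n, n))
for i in range(n):
    U[i, i:] = A[i, i:] - L[i, :i] @ U[:i, i:]
    L[i + 1:, i] = (A[i + 1:, i] - L[i + 1:, :i] @ U[:i, i]) / U[i, i]

assert np.allclose(L @ U, A)
# L corresponds to I + K_- and U to I + K_+; the diagonal of U tends
# to the identity as the mesh is refined (K_+ is a Volterra kernel).
print(np.max(np.abs(np.diag(U) - 1.0)))
```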

PROOF of Theorem I.8.2. The equivalence of the statements (1) and (4) is well-known (see [11], Chapter IV). Take a ≤ s < b. Put Θ = Θ(Δ) (see formula (2.1)). We know that T is the input/output map of the system Θ. So the kernel k of T is as in (I.1.3). It follows that T^s is the input/output map of the system

Θ^s = (0, B(t), C(t); I - P, P)_s^b.

The function Z(t) := Ω(t)Ω(s)^{-1}, s ≤ t ≤ b, is the fundamental operator of the inverse system (Θ^s)× associated with Θ^s. Since T^s is invertible, the results of §II.4 in [5] imply that

det(I - P + PΩ(b)Ω(s)^{-1}) ≠ 0.

But then det((I - P)Ω(s) + PΩ(b)) ≠ 0. Since a ≤ s < b is arbitrary, we see that (4) implies (3).

Since P is a projection, the Riccati differential equation (I.8.7) is well-defined. Assume (3) holds. Then I - P + PΩ(b)Ω(t)^{-1} is invertible. For a ≤ t ≤ b define R(t) by (I.8.11). A direct computation shows that R(·) is the solution of the equation (I.8.7).

Next, assume that the equation (I.8.7) is solvable, and let R(t), a ≤ t ≤ b, be its solution. Put Q = -R(a), and define Π = (I - P) + Q(I - P). Since Q : Ker P → Im P, the operator Π is a projection and Ker Π = Im P. We shall prove that Π is a decomposing projection for Θ := Θ(Δ). To do this we use Lemma II.3.1. Obviously, Ker Π is invariant under P. Put

(8.4) G(t) = (R(t) + Q)(I - P)Π : Im Π → Ker Π, a ≤ t ≤ b.

Since (I - P)Q = 0 it is straightforward to check that

(8.5) I - Π + G(t)Π = P + R(t)(I - P),

(8.6) G(t)Π - Π = R(t)(I - P) - (I - P),

(8.7) G(a) = 0, G(b) = Q(I - P)Π.

It follows that G is the solution of the following Riccati differential equation:

(8.8) Ġ(t) = (I - Π + G(t)Π)(-B(t)C(t))(G(t)Π - Π), a ≤ t ≤ b, G(a) = 0.

Next, observe that

G(b)Π = Q(I - P)Π = (I - Π)PΠ.

So Lemma II.3.1 implies that Π is a decomposing projection for Θ. The left and right factors corresponding to Π are given by

ℓ_Π(Θ) = (0, (I - Π + G(t)Π)B(t), C(t)|Ker Π; 0, I_{Ker Π})_a^b,

r_Π(Θ) = (0, ΠB(t), C(t)(Π - G(t)Π)|Im Π; I_{Im Π}, 0)_a^b.

Let T1 and T2 be the input/output maps of ℓ_Π(Θ) and r_Π(Θ), respectively. A simple computation, using (8.5) and (8.6), shows that T1 = I + H_+ with H_+ given by (I.8.10) and T2 = I + H_- with H_- given by (I.8.9). It follows that T = (I + H_+)(I + H_-), which proves (1). □

PROOF of Corollary I.8.3. Let T = T_-T_+ be a LU-factorization. Let Δ be a SB-minimal representation of T with internal operator P. From Theorem I.8.1 we know that P is a decomposing projection for Δ and (because of the uniqueness of the LU-factorization) T = T_-T_+ is the corresponding factorization. Since Δ is SB-minimal, it follows (cf. Theorem 3.1) that T = T_-T_+ is a SB-minimal factorization.

Next, let T = T_+T_- be a UL-factorization. Then T^{-1} = T_-^{-1}T_+^{-1} and this factorization is a LU-factorization. So, by the result of the preceding paragraph, the factorization T^{-1} = T_-^{-1}T_+^{-1} is SB-minimal. Now apply Corollary 7.1 to T^{-1}. We conclude that T = T_+T_- is also a SB-minimal factorization. □

III.9 Causal/anticausal decompositions

A boundary value system Θ is said to admit a causal/anticausal decomposition if Θ ≈ Θ_-Θ_+ with Θ_- a causal system and Θ_+ an anticausal system. Note that Θ_- and Θ_+ are SB-systems; in fact, the canonical boundary value operator of Θ_- is the zero operator and for Θ_+ this operator is the identity operator. Thus, if Θ admits a causal/anticausal decomposition, then Θ must be a SB-system (cf. Lemma 3.2). The following theorem is the system theoretical version of Theorem I.8.1.


9.1. THEOREM. Let Θ = (A(t),B(t),C(t);N1,N2)_a^b be a SB-system, and let U(t) and U×(t) denote the fundamental operators of Θ and the inverse system Θ×. The following statements are equivalent:

(1) Θ admits a causal/anticausal decomposition;

(2) the canonical boundary value operator P of Θ is a decomposing projection for Θ;

(3) the following Riccati differential equation has a solution R(t) : Im P → Ker P on a ≤ t ≤ b:

(9.1) Ṙ(t) = (I - P + R(t)P)(-U(t)^{-1}B(t)C(t)U(t))(R(t)P - P), a ≤ t ≤ b, R(a) = 0;

(4) det(I - P + U(t)^{-1}U×(t)P) ≠ 0 for a ≤ t ≤ b.

Furthermore, in that case Θ ≈ Θ_-Θ_+ with

(9.2) Θ_- = (0, (I - P + R(t)P)U(t)^{-1}B(t), C(t)U(t)|Ker P; I_{Ker P}, 0)_a^b,

(9.3) Θ_+ = (0, PU(t)^{-1}B(t), C(t)U(t)(P - R(t)P)|Im P; 0, I_{Im P})_a^b,

where R(t) : Im P → Ker P, a ≤ t ≤ b, is the solution of (9.1), which is equal to

R(t) = -(I - P)U(t)^{-1}U×(t)P(PU(t)^{-1}U×(t)P)^{-1}.

PROOF. Let T be the input/output map of Θ. Put Δ = Δ(Θ) (see formula (2.2)). Then Δ is a SB-representation of T and the internal operator of Δ is equal to the canonical boundary value operator P of Θ. Note that the fundamental operator Ω(t) of Δ is given by

Ω(t) = U(t)^{-1}U×(t), a ≤ t ≤ b.

It follows that P is a decomposing projection for Θ if and only if P is a decomposing projection for Δ. The equivalence of the statements (2), (3) and (4) is now clear from Theorem I.8.1. It remains to prove the equivalence of (1) and (2).

Assume P is a decomposing projection for Θ. From the definitions of the left and right factor we see that ℓ_P(Θ) = Θ_- and r_P(Θ) = Θ_+, where Θ_- and Θ_+ are given by (9.2) and (9.3). Formula (II.4.3) implies that Θ ≈ Θ_-Θ_+. Obviously, Θ_- is a causal system and Θ_+ is an anticausal system. So (2) implies (1).


Next, assume that Θ ≈ Θ_-Θ_+ is a causal/anticausal decomposition of Θ. Let T_- and T_+ be the input/output maps of Θ_- and Θ_+, respectively. Then T = T_-T_+ is a LU-factorization. So we can apply Theorem I.8.1 to show that P is a decomposing projection for Δ. Hence (2) holds. □

A boundary value system Θ is said to admit an anticausal/causal decomposition if Θ ≈ Θ_+Θ_- with Θ_+ an anticausal system and Θ_- a causal system. In order that such a decomposition exists it is necessary that Θ is a SB-system (Lemma 3.2).

9.2. THEOREM. Let Θ = (A(t),B(t),C(t);N1,N2)_a^b be a SB-system, and let U(t) and U×(t) be the fundamental operators of Θ and the inverse system Θ×. Let P denote the canonical boundary value operator of Θ. The following statements are equivalent:

(1) Θ admits an anticausal/causal decomposition;

(2) the following Riccati differential equation has a solution R(t) : Ker P → Im P on a ≤ t ≤ b:

(9.4) Ṙ(t) = (P + R(t)(I - P))(-U(t)^{-1}B(t)C(t)U(t))(R(t)(I - P) - (I - P)), a ≤ t ≤ b, R(b) = 0;

(3) det[(I - P)U(t)^{-1}U×(t) + PU(b)^{-1}U×(b)] ≠ 0 for a ≤ t ≤ b.

Furthermore, in that case Θ ≈ Θ_+Θ_- with

(9.5) Θ_- = (0, (I - P)U(t)^{-1}B(t), C(t)U(t)(I - P - R(t)(I - P))|Ker P; I_{Ker P}, 0)_a^b,

(9.6) Θ_+ = (0, (P + R(t)(I - P))U(t)^{-1}B(t), C(t)U(t)|Im P; 0, I_{Im P})_a^b,

where R(t) : Ker P → Im P, a ≤ t ≤ b, is the solution of (9.4), which is equal to

R(t) = -P{I - P + PU(b)^{-1}U×(b)U×(t)^{-1}U(t)}^{-1}(I - P).

PROOF. As in the proof of Theorem 9.1 put Δ = Δ(Θ), and apply Theorem I.8.2 to Δ. This yields the equivalence of the statements (2) and (3).

Assume (2) holds. Define Π = (I - P) + Q(I - P), where Q = -R(a). From the proof of Theorem I.8.2 we know that Π is a decomposing projection for Θ and

ℓ_Π(Θ) = (0, (I - Π + G(t)Π)U(t)^{-1}B(t), C(t)U(t)|Ker Π; 0, I_{Ker Π})_a^b,

r_Π(Θ) = (0, ΠU(t)^{-1}B(t), C(t)U(t)(Π - G(t)Π)|Im Π; I_{Im Π}, 0)_a^b,

where G(t) = (R(t) + Q)(I - P)Π : Im Π → Ker Π. Since Ker Π = Im P, it is clear from (8.5) that ℓ_Π(Θ) = Θ_+ with Θ_+ given by (9.6). We shall prove that r_Π(Θ) is similar to Θ_-. Define S : Ker P → Im Π by Sx = Πx, x ∈ Ker P. Note that

(I - P)Sx = (I - P)(I - P + Q(I - P))x = (I - P)x = x

for each x ∈ Ker P. Thus S is an invertible operator and S^{-1}Π = I - P. It follows that

C(t)U(t)(Π - G(t)Π)S = C(t)U(t)(I - P - R(t)(I - P)).

To prove the last equality one uses (8.6) and ΠS = S. We see that the operator S establishes a similarity between r_Π(Θ) and Θ_-. We conclude that

Θ ≈ ℓ_Π(Θ)r_Π(Θ) ≈ Θ_+Θ_-,

and (1) is proved.

Next, assume (1) holds. Then T has a UL-factorization, and we can apply Theorem I.8.2 to Δ = Δ(Θ) to show that (2) holds. □

In general, causal/anticausal decompositions (or anticausal/causal decompositions) of time invariant systems cannot be made within the class of time invariant systems. One has to allow that the factors vary in time. To see this consider the following example. Put

Θ = ...

Note that Θ is a SB-system. One computes that

P = ..., U×(t) = ...

Now apply Theorem 9.1. It follows that Θ ≈ Θ_-Θ_+ with

Θ_- = (0, (2 - e^{-t})^{-1}, 2; 1, 0)_0^1,

Θ_+ = (0, -1, (2e^t - 1)^{-1}; 0, 1)_0^1.


Thus Θ admits a cascade decomposition in which the factors are time varying. It is easy to check that Θ does not have a cascade decomposition with time invariant factors.

REFERENCES

1. Anderson, B.D.O. and Kailath, T.: Some integral equations with nonsymmetric separable kernels, SIAM J. Appl. Math. 20 (4) (1971), 659-669.

2. Bart, H., Gohberg, I. and Kaashoek, M.A.: Minimal Factorization of Matrix and Operator Functions. Operator Theory: Advances and Applications, Vol. 1, Birkhäuser Verlag, Basel etc., 1979.

3. Bart, H., Gohberg, I., Kaashoek, M.A. and Van Dooren, P.: Factori­zations of transfer functions, SIAM J. Control Opt. 18 (6) (1980), 675-696.

4. Daleckii, Ju. L. and Krein, M.G.: Stability of solutions of dif­ferential equations in Banach space, Transl. Math. Monographs, Vol. 43, Amer. Math. Soc., Providence R.I., 1974.

5. Gohberg, I. and Kaashoek, M.A.: Time varying linear systems with boundary conditions and integral operators, I. The transfer operator and its properties, Integral Equations and Operator Theory 7 (1984), 325-391.

6. Gohberg, I. and Kaashoek, M.A.: Time varying linear systems with boundary conditions and integral operators, II. Similarity and reduction, Report Nr. 261, Department of Mathematics and Computer Science, Vrije Universiteit, Amsterdam, 1984.

7. Gohberg, I. and Kaashoek, M.A.: Similarity and reduction for time varying linear systems with well-posed boundary conditions, SIAM J. Control Opt., to appear.

8. Gohberg, I. and Kaashoek, M.A.: Minimal representations of semi­separable kernels and systems with separable boundary conditions, J. Math. Anal. Appl., to appear.

9. Gohberg, 1. and Kaashoek, M.A.: On minimality and stable minimality of time varying linear systems with well-posed boundary conditions, Int. J. Control. to appear.

10. Gohberg, 1. and Kaashoek, M.A.: Various minimalities for systems with boundary conditions and integral operators, Proceedings MTNS, 1985, to appear.

11. Gohberg, I. and Krein, M.G.: Theory and applications of Volterra operators in Hilbert space, Transl. Math. Monographs, Vol. 24, Amer. ~Bth. Soc., Providence R.I., 1970.


12. Kaashoek, M.A. and Woerdeman, H.J.: Unique minimal rank extensions of triangular operators, to appear.

13. Kailath, T.: Fredholm resolvents, Wiener-Hopf equations, and Riccati differential equations, IEEE Trans. Information Theory, vol. IT-15 (6) (1969), 665-672.

14. Krener, A.J.: Acausal linear systems, Proceedings 18-th IEEE CDC, Ft. Lauderdale, 1979.

15. Krener, A.J.: Boundary value linear systems, Asterisque 75/76 (1980), 149-165.

16. Krener, A.J.: Acausal realization theory, Part I: Linear deterministic systems, submitted for publication, 1986.

17. Levin, J.J.: On the matrix Riccati equation, Proc. Amer. Math. Soc. 10 (1959), 519-524.

18. Schumitzky, A.: On the equivalence between matrix Riccati equations and Fredholm resolvents, J. Computer and Systems Sciences 2 (1) (1968), 76-87.

I. Gohberg
Dept. of Mathematical Sciences
The Raymond and Beverly Sackler Faculty of Exact Sciences
Tel-Aviv University
Ramat-Aviv, Israel

M.A. Kaashoek
Subfaculteit Wiskunde en Informatica
Vrije Universiteit
Postbus 7161
1007 MC Amsterdam
The Netherlands


PART II

NON-CANONICAL WIENER-HOPF FACTORIZATION

EDITORIAL INTRODUCTION

To explain the background of this part of the book, consider

W(λ) = I_m − ∫_{−∞}^{∞} e^{iλt} k(t) dt, −∞ < λ < ∞,

where k is an m × m matrix-valued function of which the entries are in L₁(−∞,∞) and I_m stands for the m × m identity matrix. A (right) Wiener-Hopf factorization of W relative to the real line is a factorization

(1) W(λ) = W₋(λ) diag( ((λ−i)/(λ+i))^{κ₁}, …, ((λ−i)/(λ+i))^{κ_m} ) W₊(λ), −∞ < λ < ∞,

such that κ₁, …, κ_m are integers, the factors W₋ and W₊ are of the form

W₋(λ) = I_m − ∫_{−∞}^{0} e^{iλt} k₁(t) dt, Im λ ≤ 0,

W₊(λ) = I_m − ∫_{0}^{∞} e^{iλt} k₂(t) dt, Im λ ≥ 0,

where k₁ and k₂ are m × m matrix functions with entries in L₁(−∞,0] and

L1[0,00), respectively, and

det W₋(λ) ≠ 0 (Im λ ≤ 0), det W₊(λ) ≠ 0 (Im λ ≥ 0).

The integers κ₁ ≤ … ≤ κ_m are uniquely determined by W and are called the (right) factorization indices of the matrix function. The existence of the factorization (1) is proved in the Gohberg-Krein paper (Amer. Math. Soc. Transl. (2) 14 (1960), 217-287) on factorization of matrix-valued functions. Its construction is done in two steps. The first involves the case when


∫_{−∞}^{∞} ‖k(t)‖ dt < 1.

In this case the factorization is just a canonical one (i.e., the indices are

all zero) and the factors are obtained by an iterative procedure. The second

step concerns matrix-valued functions that are analytic in the open upper half

plane and continuous up to the boundary (infinity included). For matrix

functions of the latter type the factors are obtained by a repeated application of a special algorithm based on elementary matrix transformations, and the factorization indices appear at the end of this procedure as multiplicities of zeros of column functions. In general, the two steps are combined by an approximation of the original function W with a rational matrix-valued

function. This construction of the factorization does not yield explicit

formulas for the factors and it leads to the values of the indices in an

implicit way only.
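As a simple illustration (the example is ours, not part of the original text), take m = 1 and k(t) = 2e^{−t} for t > 0, k(t) = 0 for t < 0. Then

```latex
W(\lambda) \;=\; 1 - \int_0^\infty e^{i\lambda t}\, 2e^{-t}\, dt
          \;=\; 1 - \frac{2i}{\lambda + i}
          \;=\; \frac{\lambda - i}{\lambda + i},
\qquad -\infty < \lambda < \infty .
```

Here a factorization of the form (1) holds with W₋ ≡ W₊ ≡ 1 and a single factorization index κ₁ = 1 (the winding number of W about the origin).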

As in Part I, let us represent the function W in the form

(2) W(λ) = I_m + C(λ−A)^{−1}B, −∞ < λ < ∞,

where A: ℂⁿ → ℂⁿ, B: ℂᵐ → ℂⁿ and C: ℂⁿ → ℂᵐ are linear operators and A has no

eigenvalue on the real line. The main problem dealt with in this part of the

book consists of constructing explicit formulas for the factors W₋ and W₊ and the factorization indices κ₁, …, κ_m in terms of the three operators A, B and C. This problem is also considered for the case when the spaces ℂⁿ and ℂᵐ in

the realization (2) are replaced by arbitrary Banach spaces and W is analytic

on the real line and at infinity.
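As an illustration of how the data A, B, C determine the subspaces (3) and (4) below (the example is ours, not from the text), the scalar function W(λ) = (λ−i)/(λ+i) can be written in the form (2) with n = m = 1:

```latex
W(\lambda) = 1 + C(\lambda - A)^{-1}B, \qquad A = -i, \quad B = -2i, \quad C = 1,
```

so that A^× = A − BC = i. Here σ(A) = {−i} lies in the lower half plane and σ(A^×) = {i} in the upper half plane, so the contour integrals in (3) and (4) give M = (0) and M^× = (0). Thus M + M^× ≠ ℂ, in accordance with the fact that this W admits no canonical factorization (its factorization index is 1).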

The first paper in this part, "Explicit Wiener-Hopf factorization and realization", by H. Bart, I. Gohberg and M.A. Kaashoek, gives the solution

of the problem mentioned above. As in the case of canonical factorization an

important role is played by the spectral subspaces

(3) M = Im( (1/2πi) ∫_Γ (λ−A)^{−1} dλ ),

(4) M^× = Ker( (1/2πi) ∫_Γ (λ−A^×)^{−1} dλ ),

where A^× = A − BC and Γ is a suitable contour in the open upper half plane around the parts of the spectra of A and A^× in the open upper half plane. The analysis of the spaces M ∩ M^× and M + M^× leads to explicit formulas for the factorization indices and to the construction of incoming and outgoing bases.


The latter are used in a sophisticated way to obtain final formulas for the factors W₋ and W₊. Since in general the factorization (1) is not minimal, one cannot use invariant subspace methods directly, but first one has to dilate the original realization in an appropriate way. The results obtained in this paper are also valid for the case when W is analytic on the real line and at infinity.

The second paper, "Invariants for Wiener-Hopf equivalence of analytic operator functions", by H. Bart, I. Gohberg and M.A. Kaashoek, concerns

operator functions that are analytic on the real line and at infinity. The

main result gives a necessary condition for Wiener-Hopf equivalence of two

such operator functions. On the basis of this the authors show that the necessary and sufficient conditions for the existence of a Wiener-Hopf factorization are

dim(M ∩ M^×) < ∞,  dim X/(M + M^×) < ∞,

where M and M^× are given by (3) and (4), respectively. In this way they also

show that the minimality condition in the canonical factorization theorem

(Theorem 1 in the Editorial Introduction of Part I) can be omitted.

The third paper, "Multiplication by diagonals and reduction to canonical factorization", by H. Bart, I. Gohberg and M.A. Kaashoek, develops further the calculus for matrix functions in realized form. The operation

further the calculus for matrix functions in realized form. The operation

of multiplication by diagonal rational matrix functions is analyzed in geometrical terms. As a corollary one obtains a method to reduce rational matrix

functions to functions that admit canonical factorization.

The last paper, "Symmetric Wiener-Hopf factorization of selfadjoint rational matrix functions and realization", by M.A. Kaashoek and A.C.M. Ran, solves the main problem of the present Part II for the case when the original matrix function W has selfadjoint values on the real line and the factorization is also required to be of symmetric type. The method developed in the

first paper of this part is improved and adapted for symmetric factorization

problems. It turns out that this already allows one to obtain final formulas

for symmetric Wiener-Hopf factorization.


Operator Theory: Advances and Applications, Vol. 21 © 1986 Birkhauser Verlag Basel

EXPLICIT WIENER-HOPF FACTORIZATION AND REALIZATION

H. Bart, I. Gohberg, M.A. Kaashoek


Explicit formulas for Wiener-Hopf factorization of rational matrix and analytic operator functions relative to a closed contour are constructed. The formulas are given in terms of a realization of the functions. Also formulas for the factorization indices are given.

O. INTRODUCTION

Singular integral equations, Wiener-Hopf integral

equations, equations with Toeplitz matrices and other types of

equations can be solved when a Wiener-Hopf factorization of the

symbol of the equation is known (see, e.g., [GK1,GF,GKr,K]),

and in general the solutions of the equations can be obtained

as explicitly as the factors in the factorization are known.

Recall that a Wiener-Hopf factorization relative to a simple

closed contour Γ of a continuous operator function W on Γ is a

representation of W in the form:

(0.1) W(λ) = W₋(λ)( Π₀ + Σ_{j=1}^{r} ((λ−ε₁)/(λ−ε₂))^{κ_j} Π_j ) W₊(λ).

Here Π₁, …, Π_r are mutually disjoint one dimensional projections and Σ_{j=0}^{r} Π_j is the identity operator. The point ε₁ lies in the inner domain of Γ and ε₂ is in the outer domain of Γ. The operator functions W₋ and W₊ are analytic on the outer and inner domain of Γ, respectively, both W₋ and W₊

the inner and outer domain of r, respectively, both W_ and W+

are continuous up to the boundary and their values are

invertible operators. The numbers κ₁, …, κ_r are non-zero

integers, which are called factorization indices.

If W is a rational matrix function with no poles and

zeros on Γ, then W admits a Wiener-Hopf factorization relative to Γ, and there exists an algorithm which allows one to find

the factors in a finite number of steps (see, e.g., [CG],

Section 1.2). Only in the scalar case are explicit formulas for the


factors are available. In general, an arbitrary continuous

operator or matrix function does not admit a Wiener-Hopf

factorization, but necessary and sufficient conditions for its

existence are known (see [CG,GL]).

The main goal of the present paper is to produce

explicit formulas for Wiener-Hopf factorization, including

formulas for the indices, for the case when on Γ the operator function W can be written in the form:

(0.2) W(λ) = I + C(λ−A)^{−1}B.

Here A: X → X, B: Y → X and C: X → Y are (bounded linear)

operators acting between Banach spaces X and Y and the symbol I

stands for the identity operator on Y. This covers the case of

all rational matrix functions (that are analytic and invertible

at infinity) and of all operator functions that are analytic in

a neighbourhood of Γ. For canonical Wiener-Hopf factorization,

i.e., all factorization indices are equal to zero, this goal

was already achieved in [BGK1], Section 4.4. In [BGK1] we

obtained the canonical Wiener-Hopf factorization explicitly on

the basis of a geometrical principle of factorization, which

gives the factors in terms of invariant subspaces of the

operators A and A-BC.

The present paper is a continuation of [BGK1]. The

same geometrical principle of factorization is used, but the

construction of the factors is much more involved. It turns out

that one cannot apply directly the factorization principle to

the operators A, B and C appearing in (0.2), but starting with the representation (0.2) one has to prepare in a special way a new triple of operators A, B, C for which the representation (0.2) is also valid and which can be used to obtain the factors in terms of invariant subspaces of A and A−BC. Precisely what kind of special triples one has to look

for ('centralized singularities') is explained in Section 3 of

Chapter I. The first two sections of that chapter contain

preliminary material. In Chapter II we introduce the so-called


incoming characteristics. These have to do with the positive

factorization indices. Also Wiener-Hopf factorizations are

constructed explicitly for the case when all indices are non-negative. The dual concepts (outgoing characteristics) and Wiener-Hopf factorizations with non-positive indices are

treated in Chapter III. Finally, Chapter IV contains the main

result dealing with Wiener-Hopf factorizations without any a

priori restriction on the sign of the factorization indices.

An earlier version of this paper appeared in the

report [BGK2]. The main results were also announced in [BGK3].

We added Sections II.3 and III.3 which concern the cases of

indices of one sign. The material presented in these sections

makes our main factorization theorem more transparent and many

years ago it was the starting point of this study. In the paper

[KR], which also appears in this volume, the construction of

the factorization of the present paper is developed further and

applied to the selfadjoint case.

I. PRELIMINARIES

1.1. Preliminaries about transfer functions

In this paper we shall often think about analytic

operator functions as transfer functions. In the present

section we collect together some of the basic terminology

related to this notion. A node (or a system) is a quintet θ = (A,B,C;X,Y), where A: X → X, B: Y → X and C: X → Y are (bounded linear) operators acting between complex Banach spaces X, Y. The space X is called the state space; the space Y is

called the input/output space. The node θ is called finite dimensional if both X and Y are finite dimensional spaces. The operator A is referred to as the state space or main operator.

The spectrum of the operator A is denoted by σ(A). The operator function

(1.1) W(λ) = I + C(λ−A)^{−1}B

is called the transfer function of the node θ = (A,B,C;X,Y)


and is denoted by W_θ(λ). Here λ−A stands for λI−A as usual and the transfer function has to be considered as an operator function which is analytic outside σ(A) and at infinity. Part

of our terminology is taken from systems theory where the

transfer function describes the input/output behaviour of the

linear dynamical system

x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + u(t).

The idea of a node also appears in the theory of characteristic

operator functions as explained, for instance, in [Br] (see

also [BGK1], Section 1.4).

If λ is not in the spectrum of A and W(λ) is given by (1.1), then W(λ) is invertible if and only if λ is not in the spectrum of the operator A−BC, and in that case

W(λ)^{−1} = I − C[λ−(A−BC)]^{−1}B.

The operator A−BC is called the associate (main) operator of the node θ = (A,B,C;X,Y) and is denoted by A^×. Note that A^× depends not only on A but also on the other operators appearing in the node θ.
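As a quick numerical sanity check of the inversion formula above (a sketch of ours; the matrices A, B, C are arbitrary sample data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))        # main operator of a sample node
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
Ax = A - B @ C                         # associate operator A^x = A - BC

lam = 0.7 + 0.5j                       # a generic point off both spectra
I2, I3 = np.eye(2), np.eye(3)
W = I2 + C @ np.linalg.solve(lam * I3 - A, B)      # W(lam) = I + C(lam-A)^{-1}B
Winv = I2 - C @ np.linalg.solve(lam * I3 - Ax, B)  # I - C[lam-(A-BC)]^{-1}B

assert np.allclose(W @ Winv, I2) and np.allclose(Winv @ W, I2)
```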

Two nodes θ_i = (A_i,B_i,C_i;X_i,Y), i = 1,2, are said to be similar, if there exists an invertible operator S: X₁ → X₂, called a node (or system) similarity between θ₁ and θ₂, such that

A₂ = SA₁S^{−1}, B₂ = SB₁, C₂ = C₁S^{−1}.

Similar nodes have the same transfer function, but the transfer

function does not determine the node up to similarity.

The node θ = (A,B,C;X,Y) is said to be a dilation of the node θ₀ = (A₀,B₀,C₀;X₀,Y) if the state space X of θ admits a decomposition X = X₁ ⊕ X₀ ⊕ X₂, where X₀ is the state space of θ₀ and X₁, X₂ are closed subspaces of X, such that with respect to this decomposition the operators A, B and C have the


following matrix representations:

A = [ * * * ; 0 A₀ * ; 0 0 * ],  B = [ * ; B₀ ; 0 ],  C = [ 0 C₀ * ].

One checks easily that this implies that θ and θ₀ have the same transfer function (on a neighbourhood of ∞).
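The dilation property is easy to try out numerically. The following sketch is ours; the particular numbers and block sizes are illustrative choices (A block upper triangular with A₀ as middle diagonal block, B with zero bottom block, C with zero top block, as in the matrix representations above):

```python
import numpy as np

# A node theta_0 = (A0, B0, C0) and a dilation theta = (A, B, C) with
# X = X1 (+) X0 (+) X2 of dimensions 1, 2, 1; '*' entries chosen freely.
A0 = np.array([[0.5, 1.0], [0.0, -0.5]])
B0 = np.array([[1.0], [2.0]])
C0 = np.array([[1.0, -1.0]])

A = np.block([
    [np.array([[2.0]]), np.array([[3.0, 4.0]]), np.array([[5.0]])],
    [np.zeros((2, 1)),  A0,                     np.array([[6.0], [7.0]])],
    [np.zeros((1, 1)),  np.zeros((1, 2)),       np.array([[-2.0]])],
])
B = np.vstack([np.array([[9.0]]), B0, np.zeros((1, 1))])
C = np.hstack([np.zeros((1, 1)), C0, np.array([[8.0]])])

lam = 1.3  # a point where both resolvents exist
W0 = 1.0 + (C0 @ np.linalg.solve(lam * np.eye(2) - A0, B0)).item()
W = 1.0 + (C @ np.linalg.solve(lam * np.eye(4) - A, B)).item()
assert abs(W - W0) < 1e-10   # the dilation has the same transfer function
```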

Given a node θ = (A,B,C;X,Y) we define

(1.2) Ker(C|A) = ∩_{j=0}^{∞} Ker CA^j,  Im(A|B) = ⋁_{j=0}^{∞} Im A^j B.

Here ⋁_{j=0}^{∞} N_j stands for the linear hull of the spaces N₁, N₂, …. We say that the node θ is minimal if Ker(C|A) = (0) and Im(A|B) is dense in X. This notion is of particular interest if θ is finite dimensional. The following

two important results hold true: (1) Two minimal finite

dimensional nodes with the same transfer function are similar

(this result is known as the state space isomorphism theorem);

(2) any finite dimensional node is a dilation of a minimal

(finite dimensional) node (which by the state space isomorphism

theorem is unique up to similarity). The proofs of these

results can be found, e.g., in [BGK1], Section 3.2.
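For a finite dimensional node the spaces in (1.2) can be computed from the observability and controllability matrices. The following sketch is our illustration with an arbitrarily chosen node:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# By the Cayley-Hamilton theorem, powers up to n-1 suffice in (1.2).
obs = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(n)])
ctrl = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(n)])

assert np.linalg.matrix_rank(obs) == n   # Ker(C|A) = (0)
assert np.linalg.matrix_rank(ctrl) == n  # Im(A|B) = X, so the node is minimal
```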

For the definition of the product of two nodes and the

geometrical principle of factorization for transfer functions

we refer to Section 1.1 of [BGK1].

The transfer function of a finite dimensional node may

be seen as a rational matrix function which is analytic and has

the value I at infinity. Conversely, any such function is the

transfer function of a finite dimensional node. This is the so-called realization theorem (see [KFA]). Similarly, any operator

function W which is analytic in an open neighbourhood of ∞ on

the Riemann sphere and has the value I at infinity may be

written in the form (1.1) and in that case we call the right

hand side of (1.1) a realization of W. This term will also be

used for the corresponding node θ = (A,B,C;X,Y). In Chapter 2

of [BGK1] one finds a further discussion of these and related


realization results.

I.2. Preliminaries about Wiener-Hopf factorization

Throughout this paper Γ is a contour on the Riemann sphere ℂ ∪ {∞}. By assumption Γ forms the positively oriented boundary of an open set with a finite number of components in ℂ ∪ {∞} and Γ consists of a finite number of non-intersecting closed rectifiable Jordan curves. The inner domain of Γ we denote by Ω_Γ^+, and Ω_Γ^− stands for the open set on the Riemann sphere consisting of all points outside Γ. We fix two points ε₁, ε₂ in the complex plane such that ε₁ ∈ Ω_Γ^+ and ε₂ ∈ Ω_Γ^−.

Consider a continuous operator-valued function W: Γ → L(Y), where L(Y) denotes the space of all bounded linear operators on the Banach space Y. A (right) Wiener-Hopf factorization of W with respect to Γ (and the points ε₁, ε₂) is

a representation of W in the form

(2.1) W(λ) = W₋(λ)( Π₀ + Σ_{j=1}^{r} ((λ−ε₁)/(λ−ε₂))^{κ_j} Π_j ) W₊(λ)

for each λ ∈ Γ, where the factors have the following properties. By definition, Π₁, …, Π_r are mutually disjoint one dimensional projections of Y and Σ_{j=0}^{r} Π_j is the identity operator on Y. The operator functions W₋ and W₊ are holomorphic on the open sets Ω_Γ^− and Ω_Γ^+, respectively, both W₋ and W₊ are continuous up to the boundary Γ and their values are invertible operators on Y. The numbers κ₁, …, κ_r are non-zero integers, called the (right) factorization indices, which are assumed to be arranged in increasing order κ₁ ≤ κ₂ ≤ … ≤ κ_r.

In general, a Wiener-Hopf factorization (assuming it exists) is not unique, but the factorization indices are determined uniquely by W. If in (2.1) the projection Π₀ is the identity operator on Y, i.e., if

(2.2) W(λ) = W₋(λ)W₊(λ), λ ∈ Γ,


then the factorization is called canonical with respect to Γ. By interchanging in (2.1) and (2.2) the factors W₋(λ) and W₊(λ) one obtains a left Wiener-Hopf factorization and a canonical left Wiener-Hopf factorization, respectively.

canonical left Wiener-Hopf factorization, respectively.


If W(λ) is a rational matrix function with no poles and zeros on Γ, then W admits a Wiener-Hopf factorization with respect to Γ and there exists an algorithm to construct the factors (see [GF,CG]). In general, a Wiener-Hopf factorization

does not exist, but necessary and sufficient conditions for its

existence are known. For example, if Y is a Hilbert space

and Γ is closed and bounded, then W admits a right Wiener-Hopf factorization with respect to Γ if and only if the corresponding Toeplitz operator (acting on the space of all Y-valued L₂-functions on Γ with an analytic continuation to Ω_Γ^+) is Fredholm (see [GL]).

Now assume that W is the transfer function of the

node θ = (A,B,C;X,Y), i.e.,

(2.3) W(λ) = I + C(λ−A)^{−1}B, λ ∈ Γ.

To ensure that W is continuous on Γ we require that Γ does not intersect the spectrum σ(A) of A, which implies that W is analytic on a neighbourhood of Γ. Conversely, if W is analytic on a neighbourhood of Γ (and is normalized to I at ∞ whenever ∞ ∈ Γ), then one can construct explicitly operators A, B, and C such that (2.3) holds (see [BGK1], Section 2.3).

An m × m rational matrix function W, which is analytic and has the value I at ∞, can be represented in the form (2.3) with A, B and C acting on finite dimensional spaces and the node θ = (A,B,C;X,Y) being minimal. (In the latter case the continuity of W on Γ is equivalent to the statement that A has no eigenvalues on Γ.) By assumption W(λ) has to be invertible for each λ ∈ Γ. Since σ(A) ∩ Γ = ∅, this implies that the spectrum of the associate operator A^× = A − BC does not meet Γ (cf. Corollary 2.7 in [BGK1]). So we shall assume that W is the transfer function of a node θ = (A,B,C;X,Y) without


spectrum on the contour Γ, which means that A and A^× have no spectrum on Γ. Our aim is to find a Wiener-Hopf factorization of the function (2.3) in terms of the operators A, B and C. For canonical factorization we solved this problem in [BGK1]. The following theorem holds.

THEOREM 2.1. Let W be the transfer function of a node θ = (A,B,C;X,Y) without spectrum on Γ. Let M be the spectral subspace of A corresponding to the part of σ(A) in Ω_Γ^+, and let M^× be the spectral subspace of A^× corresponding to the part of σ(A^×) in Ω_Γ^−. Then W admits a canonical right Wiener-Hopf factorization with respect to Γ if and only if

(2.4) X = M ⊕ M^×.

If (2.4) is satisfied and Π is the projection of X along M onto M^×, then a canonical Wiener-Hopf factorization of W with respect to Γ is given by W(λ) = W₋(λ)W₊(λ), where

W₋(λ) = I + C(λ−A)^{−1}(I−Π)B,

W₊(λ) = I + CΠ(λ−A)^{−1}B,

W₋(λ)^{−1} = I − C(I−Π)(λ−A^×)^{−1}B,

W₊(λ)^{−1} = I − C(λ−A^×)^{−1}ΠB.

Note that the space M introduced in the above theorem is the largest A-invariant subspace such that the spectrum of A|M is in Ω_Γ^+. Similarly, M^× is the largest A^×-invariant subspace such that σ(A^×|M^×) is in Ω_Γ^−. If Ω_Γ^+ is a bounded open set in ℂ, then

M = Im( (1/2πi) ∫_Γ (λ−A)^{−1} dλ ),

M^× = Ker( (1/2πi) ∫_Γ (λ−A^×)^{−1} dλ ).

We call M, M^× the pair of spectral subspaces associated with


θ and Γ. The sufficiency of the condition (2.4) and the formulas for W₋ and W₊ are covered by Theorem 1.5 in [BGK1].

Theorem 4.9 in [BGK1] yields the necessity of condition (2.4)

for the case when θ is a minimal finite dimensional node. (The fact that in [BGK1] the set Ω_Γ^+ is assumed to be bounded is not

important and the proofs of Theorems 1.5 and 4.9 in [BGK1] go

through for the more general curves considered here.) The

necessity of the condition (2.4) for general nodes is proved in

[BGK4].
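The formulas of Theorem 2.1 are easy to try out numerically. The following sketch is ours (the particular node is an arbitrary example, with Γ the real line and Ω_Γ^+ the upper half plane); it builds M, M^× and Π and checks W = W₋W₊ at a sample point of Γ:

```python
import numpy as np

A = np.array([[1j, 0.0], [0.0, -1j]])
B = np.array([[0.5], [0.5]])
C = np.array([[0.5, 0.5]])
Ax = A - B @ C                                    # associate operator

# M: spectral subspace of A for the part of sigma(A) in the upper half plane.
M = np.array([[1.0], [0.0]])
# M^x: spectral subspace of A^x for the part of its spectrum below the line.
w, V = np.linalg.eig(Ax)
Mx = V[:, [int(np.argmin(w.imag))]]

S = np.hstack([M, Mx])                            # X = M (+) M^x
P = S @ np.diag([0.0, 1.0]) @ np.linalg.inv(S)    # projection along M onto M^x

lam = 0.3                                         # a sample point on Gamma
I1, I2 = np.eye(1), np.eye(2)
R = np.linalg.inv(lam * I2 - A)                   # (lam - A)^{-1}
W = I1 + C @ R @ B
Wm = I1 + C @ R @ (I2 - P) @ B                    # W_-(lam)
Wp = I1 + C @ P @ R @ B                           # W_+(lam)
assert np.allclose(W, Wm @ Wp)                    # canonical factorization
```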

Let us now consider the case when M ∩ M^× is non-trivial and/or M + M^× has a non-trivial complement in X. In

that case W does not admit a canonical factorization. We have

the following theorem.

THEOREM 2.2. Let W be the transfer function of a node θ without spectrum on Γ, and let M, M^× be the pair of spectral subspaces associated with θ and Γ. Then W admits a right Wiener-Hopf factorization with respect to Γ if and only if

(2.5) dim(M ∩ M^×) < ∞,  dim(X/(M + M^×)) < ∞.

The sufficiency of the condition (2.5) in Theorem 2.2

is a corollary of the main factorization theorem proved in this

chapter (cf. Theorem 3.1 in Ch.IV). The necessity of (2.5) is

proved in [BGK2]. Note that (2.5) is automatically fulfilled if

X is finite dimensional. In particular, (2.5) holds if θ is a minimal node of a rational matrix function.

1.3. Reduction of factorization to nodes with

centralized singularities

In this section θ = (A,B,C;X,Y) is a node without spectrum on Γ. Let M, M^× be the pair of spectral subspaces associated with θ and Γ. Under the condition that

(3.1) dim(M ∩ M^×) < ∞,  dim(X/(M + M^×)) < ∞,


we want to give an explicit construction of the factors in a Wiener-Hopf factorization of W(λ) = I + C(λ−A)^{−1}B in terms of A, B and C. To indicate how this may be done, note that condition (2.4) in Theorem 2.1 is equivalent to the statement that the state space X admits a decomposition into two closed subspaces, X = X₁ ⊕ X₂, such that relative to this decomposition the operators A and A^× can be written in the form

A = [ A₁ * ; 0 A₂ ],  A^× = [ A₁^× 0 ; * A₂^× ],

where A₁ and A₁^× have their spectra in Ω_Γ^+ and the spectra of A₂ and A₂^× are in Ω_Γ^−. Furthermore, in this language the factors in the canonical factorization W(λ) = W₋(λ)W₊(λ) are given by

W₋(λ) = I + C₁(λ−A₁)^{−1}B₁,

W₊(λ) = I + C₂(λ−A₂)^{−1}B₂,

where

B = [ B₁ ; B₂ ],  C = [ C₁ C₂ ]

are the operator matrices of B and C relative to the decomposition X = X₁ ⊕ X₂.

For the non-canonical case we shall see that in order to find a Wiener-Hopf factorization (relative to Γ and the points ε₁, ε₂) of the function W(λ) = I + C(λ−A)^{−1}B, one has to look for a decomposition of the space X into four closed subspaces, X = X₁ ⊕ X₂ ⊕ X₃ ⊕ X₄, with X₂ and X₃ finite dimensional, such that with respect to this decomposition the operators A, A^×, B and C can be written in the form

(3.2) A = [ A₁ * * * ; 0 A₂ 0 * ; 0 0 A₃ * ; 0 0 0 A₄ ],  A^× = [ A₁^× 0 0 0 ; * A₂^× 0 0 ; * 0 A₃^× 0 ; * * * A₄^× ],


(3.3) B = [ B₁ ; B₂ ; B₃ ; B₄ ],  C = [ C₁ C₂ C₃ C₄ ],

and the following properties hold:

(i) the spectra of A₁, A₁^× are in Ω_Γ^+;

(ii) the spectra of A₄, A₄^× are in Ω_Γ^−;

(iii) the operators A₂−ε₁ and A₂^×−ε₂ have the same nilpotent Jordan normal form in bases {e_{jk}} and {d_{jk}}, i.e.,

(3.4a) (A₂−ε₁)e_{jk} = e_{j,k+1}, k = 1,…,α_j,

(3.4b) (A₂^×−ε₂)d_{jk} = d_{j,k+1}, k = 1,…,α_j,

where e_{j,α_j+1} = d_{j,α_j+1} = 0 and j = 1,…,t, and the two bases are related by

(3.5a) e_{jk} = Σ_{u=0}^{k−1} (k−1 choose u)(ε₁−ε₂)^u d_{j,k−u},

(3.5b) d_{jk} = Σ_{u=0}^{k−1} (k−1 choose u)(ε₂−ε₁)^u e_{j,k−u},

where k = 1,…,α_j and j = 1,…,t;

(iv) rank B₂ = rank C₂ = t with t as in (iii);

(v)


the operators A₃−ε₂ and A₃^×−ε₁ have the same nilpotent Jordan form in bases {f_{jk}}_{k=1,…,ω_j; j=1,…,s} and {g_{jk}}_{k=1,…,ω_j; j=1,…,s} respectively, i.e.,

(3.6a) (A₃−ε₂)f_{jk} = f_{j,k+1}, k = 1,…,ω_j,

(3.6b) (A₃^×−ε₁)g_{jk} = g_{j,k+1}, k = 1,…,ω_j,

where f_{j,ω_j+1} = g_{j,ω_j+1} = 0 and j = 1,…,s, and the two bases are related by

(3.7a) g_{jk} = Σ_{u=0}^{k−1} (u+ω_j−k choose u)(ε₂−ε₁)^u f_{j,k−u},

(3.7b) f_{jk} = Σ_{u=0}^{k−1} (u+ω_j−k choose u)(ε₁−ε₂)^u g_{j,k−u},

where k = 1,…,ω_j and j = 1,…,s;

(vi) rank B₃ = rank C₃ = s with s as in (v).

We shall suppose that the numbers α_j and ω_j appearing in (iii) and (v) are ordered in the following way:

α₁ ≥ α₂ ≥ … ≥ α_t,  ω₁ ≤ ω₂ ≤ … ≤ ω_s.

A node θ = (A,B,C;X,Y) for which the state space X admits a decomposition X = X₁ ⊕ X₂ ⊕ X₃ ⊕ X₄ with the properties described above will be called a node with centralized singularities (relative to the contour Γ and the points ε₁, ε₂). Note that the operators A₂, A₃ and A₂^×, A₃^× appearing in the centers of A and A^× all act on a finite dimensional space and their spectra consist of a single eigenvalue which is either ε₁ or ε₂. It follows that the diagonal elements in the operator matrices (3.2) have no spectrum on Γ, and hence A and A^× have no spectrum on Γ. Thus a node with centralized singularities is a node without


spectrum on Γ. Let M, M^× be the associated pair of spectral subspaces. From the fact that the spectra of A₁ and A₂ are in Ω_Γ^+ and the spectra of A₃ and A₄ in Ω_Γ^−, we conclude that M = X₁ ⊕ X₂. In a similar way one shows that M^× = X₂ ⊕ X₄. It follows that X₂ = M ∩ M^× and X₃ is a complement of M + M^× in X. In particular, we see that for a node with centralized singularities condition (3.1) is fulfilled. One of the main points in the definition of a node with centralized singularities is given by the following theorem.

THEOREM 3.1. Let W be the transfer function of a node θ with centralized singularities, i.e., for the node θ the formulas (3.2), (3.3) and the properties (i) - (vi) hold true. Put

W₋(λ) = I + C₁(λ−A₁)^{−1}B₁,

W₊(λ) = I + C₄(λ−A₄)^{−1}B₄,

D(λ) = I + C₂(λ−A₂)^{−1}B₂ + C₃(λ−A₃)^{−1}B₃.

Then

(3.8) W(λ) = W₋(λ)D(λ)W₊(λ), λ ∈ Γ,

and the operator function D(λ) is of the form

(3.9) D(λ) = Π₀ + Σ_{j=1}^{t} ((λ−ε₁)/(λ−ε₂))^{−α_j} Π₋ⱼ + Σ_{j=1}^{s} ((λ−ε₁)/(λ−ε₂))^{ω_j} Π_j,

where Π₋ₜ,…,Π₋₁,Π₁,…,Π_s are mutually disjoint one dimensional projections of Y and Σ_{j=−t}^{s} Π_j is the identity operator on Y. Furthermore, the factorization (3.8) is a right Wiener-Hopf factorization of W relative to Γ and the points ε₁, ε₂. In particular, −α₁,…,−α_t, ω₁,…,ω_s are the right factorization indices.

PROOF. Let W(λ) = I + C(λ−A)^{−1}B with A, B and C as in


(3.2) and (3.3), and assume that the properties (i) - (vi) hold

true. From the geometrical factorization principle (in Section

1.1 in [BGK1]) and the special form of the operator matrix

representations for A and A^× in (3.2) it is readily seen that the factorization (3.8) holds true. Further, the spectral conditions on A₁, A₁^× and A₄, A₄^× (see the properties (i), (ii))

imply that the operator functions W₋ and W₊ have the properties which are necessary for the factors in a right Wiener-Hopf factorization. Thus it remains to show that the middle term D(λ) in the right hand side of (3.8) has the desired

diagonal form. This will involve the properties of the center

parts only.

From the special form of the operator matrix for A−A^× one easily sees that

(3.10) B₂C₂ = A₂ − A₂^×, B₃C₂ = 0,

(3.11) B₃C₃ = A₃ − A₃^×, B₂C₃ = 0.

We shall need the following identities:

(3.12) (A₂−A₂^×)e_{jα_j} = Σ_{u=0}^{α_j−1} (α_j choose u+1)(ε₁−ε₂)^{u+1} d_{j,α_j−u};

(3.13) (A₂−A₂^×)e_{jk} = 0, k = 1,…,α_j−1;

(3.14) (A₃−A₃^×)g_{j1} = (ε₁−ε₂)^{ω_j} g_{j1}.

One proves these identities by straightforward calculation (cf., e.g., the proof of formula (1.11) in Section II.1 below).

Put z_j = C₂e_{jα_j} for j = 1,…,t. According to (3.10) we have B₂z_j = B₂C₂e_{jα_j} = (A₂−A₂^×)e_{jα_j} and B₃z_j = 0. From (3.12) it is clear that the vectors B₂z₁,…,B₂z_t are linearly independent. Since rank B₂ = t, we may conclude that

(3.15) Im C₂ ∩ Ker B₂ = (0)


and the vectors z₁,…,z_t are linearly independent. Note that z₁,…,z_t are in Im C₂. But rank C₂ is also equal to t. Hence Im C₂ = sp{z₁,…,z_t}. We shall see that this implies that

(3.16) C₂e_{jk} = 0, k = 1,…,α_j−1.

Indeed, take 1 ≤ k ≤ α_j−1. From (A₂−A₂^×)e_{jk} = 0 it is clear that B₂C₂e_{jk} = 0. It follows that C₂e_{jk} ∈ Im C₂ ∩ Ker B₂, but the latter space consists of the zero element only (cf. (3.15)). From (3.16) and (3.5b) we may conclude that

(3.17) C₂d_{jk} = 0, k = 1,…,α_j−1.

Next, using (3.10), (3.12) and (3.17), one computes (recall that B₃z_j = 0)

D(λ)z_j = z_j + C₂(λ−A₂)^{−1}B₂z_j = ((λ−ε₂)/(λ−ε₁))^{α_j} z_j, j = 1,…,t.

Put y_j = (1/(ε₁−ε₂)^{ω_j}) C₃g_{j1} for j = 1,…,s. Using the identities (3.11) and (3.14) we see that B₂y_j = 0 and B₃y_j = g_{j1}. It follows that y₁,…,y_s are linearly independent vectors in Ker B₂ and, since rank B₃ = s, there exists a closed subspace Y₀ of Y such that


(3.18) Ker B₂ = Y₀ ⊕ sp{y₁,…,y_s}, Y₀ ∩ Im C₃ = (0)

(cf. (3.15)). Next, we prove

(3.19) C₃g_{ju} ∈ sp{y_j}, u = 1,…,ω_j.

Obviously C₃g_{ju} ∈ Im C₃. The vectors y₁,…,y_s are also in Im C₃. Since rank C₃ is equal to s, it follows that C₃g_{ju} = Σ_{k=1}^{s} γ_k y_k for certain scalars γ₁,…,γ_s. Now

B₃C₃g_{ju} = Σ_{k=1}^{s} γ_k B₃y_k = Σ_{k=1}^{s} γ_k g_{k1}.

On the other hand (using formula (3.14)), B₃C₃g_{ju} = (A₃−A₃^×)g_{ju} lies in sp{g_{j1}}. It follows that γ_k = 0 for k ≠ j, which proves (3.19). Note that

D(λ)y_j = y_j + Σ_{β=1}^{ω_j} (ω_j choose β)((ε₂−ε₁)/(λ−ε₂))^{β} y_j = ((λ−ε₁)/(λ−ε₂))^{ω_j} y_j.

Obviously, D(λ)y₀ = y₀ for each y₀ ∈ Y₀.

For 1 ≤ j ≤ t define Π₋ⱼ to be the projection of Y onto the space spanned by z_j along the space spanned by Y₀, the vectors y₁,…,y_s and the vectors z_k (k ≠ j). Similarly,

for 1 ≤ j ≤ s define Π_j to be the projection onto sp{y_j} along the space spanned by Y₀, the vectors z₁,…,z_t and the vectors y_k (k ≠ j). Further, we define Π₀ to be the projection of Y onto Y₀ along the vectors z₁,…,z_t and y₁,…,y_s. Then the projections Π₋ₜ,…,Π₋₁,Π₀,Π₁,…,Π_s have the desired properties and formula (3.9) holds true. □


The converse of Theorem 3.1 also holds true, that is,
if the operator function W is analytic on Γ (and normalized to
I at ∞ if ∞ ∈ Γ) and if W admits a right Wiener-Hopf
factorization with respect to Γ, then W is the transfer
function of a node with centralized singularities. This is
proved in [BGK4].

We say that a node θ = (A,B,C;X,Y) with centralized
singularities is simple if the spaces X₁ and X₄ consist of the
zero element only. In that case X = X₂ ⊕ X₃ with X₂ and X₃
finite dimensional, and with respect to the decomposition
X = X₂ ⊕ X₃ the operators A, B and C admit block representations
whose entries have the properties described in
(iii)-(vi) above. A simple node with centralized
singularities has by definition a finite dimensional state
space. Moreover, such a node is a minimal node. Note that the
proof of Theorem 3.1 shows that the transfer function of a
simple node with centralized singularities is the diagonal term
in a Wiener-Hopf factorization. In [BGK4] it is proved that,
conversely, any minimal realization of the diagonal term in a
Wiener-Hopf factorization is a simple node with centralized
singularities.

Let θ = (A,B,C;X,Y) be a node with centralized
singularities, and assume that θ is finite dimensional and
minimal. Then the factorization (3.8) is a minimal
factorization (cf. Theorem 4.8 in [BGK1]). However, between the
factors in a Wiener-Hopf factorization there may be pole-zero
cancellation and hence, in general, such a factorization is not
a minimal one. It follows that a node without spectrum on Γ and
satisfying condition (3.1) does not have to be a node with
centralized singularities. The best one can hope for is that a
dilation with a sufficiently large state space has centralized
singularities.

It turns out that the dimension of the state space is
not the only obstruction. It may happen that there exists a
dilation θ such that θ does not have centralized singularities
while on the other hand the state space dimension of θ is as
large as one likes. To see this we consider the following
example.

EXAMPLE 3.2. Put W(λ) = (λ-ε₁)/(λ-a), where
a ∈ Ω⁻_Γ, a ≠ ε₂. The scalar function W(λ) is the transfer
function of the minimal node θ₀ = (a,1,a-ε₁;ℂ,ℂ), and

(3.20)    W(λ) = ((λ-ε₁)/(λ-ε₂)) · ((λ-ε₂)/(λ-a)),    λ ∈ Γ,

is a right Wiener-Hopf factorization of W relative to Γ. Let
A₁: X₁ → X₁ be a linear operator on the n-dimensional linear
space X₁, and assume that the Jordan normal form of A₁ consists
of a single Jordan block with eigenvalue b, say. We shall
suppose that b ∉ Γ and b ≠ a, b ≠ ε₁. Put X = X₁ ⊕ ℂ, and
consider

(3.21)    A = [A₁  0; 0  a],    B = [B₁; 1],    C = (0  a-ε₁),

where B₁ is a vector in X₁. The node θ = (A,B,C;X,ℂ) is a dilation of the node θ₀ and the
dimension of its state space is n+1. Observe that the associate
main operator is given by

A^× = A - BC = [A₁  -(a-ε₁)B₁; 0  ε₁].

It follows that A and A^× have no spectrum on Γ.

Since b ≠ a, any A-invariant subspace Z is of the
form Z = N ⊕ L, where N is an A₁-invariant subspace of X₁ and
L = (0) or L = ℂ. From the fact that the Jordan normal form of
A₁ consists of a single Jordan block it follows that the
invariant subspace structure of A₁ is very rigid and consists
of a chain

(3.22)    (0) = N₀ ⊂ N₁ ⊂ ••• ⊂ Nₙ = X₁

of n+1 subspaces. Since ε₁ ≠ b, the invariant subspaces of
A^× are of the same form as those of A. Now, let Z and Z^× be
nonzero subspaces invariant under A and A^×, respectively, and
assume that X = Z ⊕ Z^×. We know that Z = N ⊕ L and
Z^× = N^× ⊕ L^×, where N and N^× are members of the chain (3.22).
Thus N ⊂ N^× or N^× ⊂ N. But then the fact that
N ∩ N^× = (0) implies that either N = (0) or N^× = (0). It
follows that there are only two possibilities, namely

Z = X₁ ⊕ (0), Z^× = (0) ⊕ ℂ    or    Z = (0) ⊕ ℂ, Z^× = X₁ ⊕ (0).

In the first case the corresponding factorization is
W(λ) = 1·W(λ) and the second case gives W(λ) = W(λ)·1.
Thus, the geometrical principle of factorization applied to the
node θ yields only trivial factorizations and does not give the
Wiener-Hopf factorization (3.20). In this way one sees that the
node θ does not have centralized singularities.
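The scalar identity (3.20) itself is easy to verify numerically. The sketch below is our own illustration; the sample values for ε₁, ε₂, a and the test points are arbitrary choices (the algebraic identity holds at every λ away from the poles).

```python
# Sketch: pointwise check of the scalar factorization (3.20),
#   (lam - e1)/(lam - a) = [(lam - e1)/(lam - e2)] * [(lam - e2)/(lam - a)].
# The values of e1, e2, a below are arbitrary sample choices.
def W(lam, e1, a):
    return (lam - e1) / (lam - a)

def D(lam, e1, e2):
    # diagonal middle term, index +1
    return (lam - e1) / (lam - e2)

def W_minus(lam, e2, a):
    # minus factor; its pole a lies outside the contour
    return (lam - e2) / (lam - a)

e1, e2, a = 0.3 + 0.1j, 2.5, 4.0 - 1.0j
for lam in (1.0 + 1.0j, -2.0, 0.5j):
    assert abs(W(lam, e1, a) - D(lam, e1, e2) * W_minus(lam, e2, a)) < 1e-12
```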

To obtain a dilation with an infinite dimensional
state space which does not have centralized singularities, we
take for the operator A₁ in (3.21) a unicellular operator (see
[GK2], Section 1.9) which acts on an infinite dimensional
Banach space X₁ and whose spectrum consists of the point b
only. For example, we may take X₁ = L₂[0,1] and

(A₁f)(t) = bf(t) - 2i ∫ₜ¹ f(s)ds.

Then the node θ = (A,B,C;X,Y) has an infinite dimensional state
space, and with arguments similar to the ones used before one
sees that θ does not have centralized singularities.
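A crude finite-dimensional illustration (our own, not from the text) of why this A₁ qualifies: discretizing the integral term on an n-point grid turns A₁ into b·I plus a strictly triangular, hence nilpotent, matrix, so every eigenvalue of the discretization equals b, mirroring σ(A₁) = {b}.

```python
import numpy as np

# Sketch: n-point discretization of (A_1 f)(t) = b f(t) - 2i * int_t^1 f(s) ds.
# Integration from t to 1 only involves grid nodes above t, giving a strictly
# upper triangular matrix; the discretized operator is b*I + (nilpotent),
# so its spectrum is exactly {b}.
n, b, h = 50, 0.5 + 0.25j, 1.0 / 50
K = np.triu(np.ones((n, n)), k=1)          # strictly upper triangular part
A1 = b * np.eye(n) - 2j * h * K
assert np.allclose(np.linalg.eigvals(A1), b)
```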

From the previous example it is clear that not any
dilation with a sufficiently large state space has the desired
centralized singularities. To obtain the Wiener-Hopf
factorization one has to construct a specific dilation. This is
exactly the problem which is solved in the next sections.
Starting with a node θ = (A,B,C;X,Y) which has no spectrum
on Γ and for which condition (3.1) is fulfilled, we shall
construct explicitly in terms of A, B, C and the spaces M
and M^× a dilation of θ with centralized singularities. Among
other things the following theorem will be proved.

THEOREM 3.3. Let θ = (A,B,C;X,Y) be a node without
spectrum on Γ, and let M, M^× be the pair of spectral subspaces
associated with θ and Γ. Assume

dim X/(M+M^×) < ∞,

and let K be a complement of M+M^× in X. Then θ can be dilated
to a node θ̃ with centralized singularities and with state space

II. INCOMING CHARACTERISTICS

II.1. Incoming bases

Let θ = (A,B,C;X,Y) be a node without spectrum on
Γ, and let M, M^× be the pair of spectral subspaces associated
with θ and Γ. In this section we study complements of M+M^× in
X. Throughout this section we work under the assumption that

(1.1)    dim X/(M+M^×) < ∞.

Since M, M^× are operator ranges, the same is true for M+M^×,
and hence (1.1) implies that M+M^× is closed in X.

We begin by considering the sequence of incoming
subspaces for θ. These subspaces are defined by

(1.2)    H_j = M + M^× + Im B + Im AB + ••• + Im A^{j-1}B,

where j = 0,1,2,.... Of course H₀ is meant to be M+M^×.
Note that the spaces H_j do not change if in (1.2) the operator
A is replaced by A^×. Obviously H₀ ⊂ H₁ ⊂ H₂ ⊂ ••• . Since H₀ =
M+M^× has finite codimension in X, not all inclusions are
proper. Let ω be the smallest integer such that H_ω = H_{ω+1}. We
claim that H_ω = X. To prove this, first note that H_ω is
invariant under A. Thus H_ω = M + M^× + Im(A|B) (cf. formula
(1.2) in Ch.I), and the next lemma shows that H_ω = X.

LEMMA 1.1. The space M + M^× + Im(A|B) is equal to X.

PROOF. Write X = X₁ ⊕ X₀, where
X₁ = M + M^× + Im(A|B). We know that X₁ is invariant under A
and A^×. Further, since A^× - A = BC, the compressions of A
and A^× to X₀ coincide. It follows that the operator matrices of
A and A^× with respect to the decomposition X = X₁ ⊕ X₀
are of the following form:

A = [A₁  *; 0  A₀],    A^× = [A₁^×  *; 0  A₀].

Since A has no spectrum on Γ and X₀ is finite dimensional, also
A₀ has no spectrum on Γ. Let M₀ (resp. M₀^×) be the spectral
subspace of A₀ corresponding to the part of σ(A₀) inside (resp.
outside) Γ. Then M₀ ⊂ M + X₁ = X₁ and M₀^× ⊂ M^× + X₁ = X₁, and
thus M₀ = M₀^× = {0}. But then σ(A₀) must be empty, which proves
that X₀ = {0}. □

Let ε be a complex number. Note that the spaces H_j do not
change if in (1.2) the operator A is replaced by A-ε.


It follows that

(1.3)    H_{k+1} = H₁ + (A-ε)H_k,    k = 1, 2,... .
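The shift invariance behind these identities is easy to see in coordinates: expanding (A-εI)^i B by the binomial theorem shows that the sums Im B + Im AB + ••• + Im A^{j-1}B do not change when A is replaced by A-ε. A small numerical illustration (our own, with random matrices, comparing dimensions) follows.

```python
import numpy as np

# Sketch: the Krylov-type sums Im B + Im AB + ... + Im A^{j-1}B are invariant
# under the shift A -> A - eps*I, since (A - eps)^i B is a linear combination
# of B, AB, ..., A^i B and vice versa.  Here we compare the ranks.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 2))
eps = 1.7

def krylov_rank(A, B, j):
    blocks = [np.linalg.matrix_power(A, i) @ B for i in range(j)]
    return np.linalg.matrix_rank(np.hstack(blocks))

for j in range(1, 6):
    assert krylov_rank(A, B, j) == krylov_rank(A - eps * np.eye(5), B, j)
```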

We shall use these identities to construct a system of vectors

(1.4)    f_{jk},    k = 1,...,ω_j,  j = 1,...,s,

with the following properties:

(1)  1 ≤ ω₁ ≤ ••• ≤ ω_s;

(2)  (A-ε)f_{jk} - f_{j,k+1} ∈ M + M^× + Im B, k = 1,...,ω_j,
     where by definition f_{j,ω_j+1} = 0;

(3)  the vectors f_{jk}, min{p | ω_p ≥ k} ≤ j ≤ s, form a
     basis for H_k modulo H_{k-1}.

Such a system of vectors we shall call an incoming basis for
θ (with respect to the operator A-ε).

For all incoming bases for θ the integers s and
ω₁,...,ω_s are the same and independent of the choice of ε.
In fact

(1.5)    s = dim(H₁/H₀),

(1.6)    #{j | 1 ≤ j ≤ s, ω_j ≥ k} = dim(H_k/H_{k-1}),    k = 1,...,ω.

Both formulas are clear from condition (3). We call ω₁,...,ω_s
the incoming indices of the node θ. Observe that
ω_s = ω. The incoming indices are also given by the
following identity:

(1.7)    #{j | 1 ≤ j ≤ s, ω_j = k} = dim(H_k/H_{k-1}) - dim(H_{k+1}/H_k).


Note that for an incoming basis {f_{jk}}_{k=1,j=1}^{ω_j,s} the following
holds:

(3a)  the vectors f_{jk}, k = 1,...,ω_j, j = 1,...,s, form
      a basis for a complement of M+M^×;

(3b)  the vectors f₁₁,...,f_{s1} form a basis
      of M + M^× + Im B modulo M+M^×.

Conversely, if (1.4) is a system of vectors such that (1), (2),
(3a) and (3b) are satisfied, then the system (1.4) is an
incoming basis for θ. This is readily seen by using (1.3).

We now come to the construction of an incoming basis,
which is based on (1.3) and uses a method employed in [GKS],
Section 1.6. Put

s_k = dim(H₁/H₀) - dim(H_{k+1}/H_k),    k = 0,1,...,ω.

Obviously s_ω = dim(H₁/H₀) = s. Fix k ∈ {1,...,ω}. From (1.3)
it is clear that A-ε induces a surjective linear transformation

[A-ε]: H_k/H_{k-1} → H_{k+1}/H_k.

This implies that s_k ≥ s_{k-1} and dim Ker[A-ε] = s_k - s_{k-1}.
Assume [A-ε][f] = 0, where [f] denotes the class in
H_k/H_{k-1} containing f. Then there exists g ∈ [f] such that
(A-ε)g ∈ H₁. Indeed, from [A-ε][f] = 0 it follows that
(A-ε)f ∈ H_k, and so by (1.3) there exists h ∈ H_{k-1} such that
(A-ε)(f-h) ∈ H₁. The vector g = f-h has the good properties.
Using this result we can choose vectors f_{jk},
s_{k-1}+1 ≤ j ≤ s_k, in H_k, linearly independent modulo H_{k-1}, such
that [f_{s_{k-1}+1,k}],...,[f_{s_k,k}] is a basis of Ker[A-ε] and
(A-ε)f_{jk} ∈ H₁ for these j.
Now let f_{s_k+1,k+1},...,f_{s,k+1} be a basis for H_{k+1} modulo
H_k. According to (1.3) there exist vectors f_{jk}, s_k+1 ≤ j ≤ s,
in H_k such that

(A-ε)f_{jk} - f_{j,k+1} ∈ H₁.

Clearly, the vectors f_{jk}, s_{k-1}+1 ≤ j ≤ s, form a basis of H_k
modulo H_{k-1}. Repeating the construction for k-1 instead of k, it
is clear how one can get in a finite number of steps an
incoming basis. Note that ω_j = k whenever s_{k-1}+1 ≤ j ≤ s_k.

Let the system of vectors (1.4) be an incoming basis
for θ with respect to A-ε. Put

(1.8)    K = sp{f_{jk} | k = 1,...,ω_j, j = 1,...,s}.

Clearly, X = (M+M^×) ⊕ K. Define T: K → K by

(1.9)    (T-ε)f_{jk} = f_{j,k+1},    k = 1,...,ω_j,

where, as before, f_{j,ω_j+1} = 0. We call T the incoming
operator for θ associated with the incoming basis (1.4). Note
that with respect to the basis (1.4) the matrix of T has Jordan
normal form with ε as the only eigenvalue and the sizes of the
Jordan blocks are ω₁,...,ω_s.
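This matrix description can be made concrete. In the sketch below (our own illustration) we assemble the matrix of T in the ordered basis f₁₁,...,f₁ω₁, f₂₁,...: by (1.9) it consists of Jordan blocks with eigenvalue ε of sizes ω₁,...,ω_s.

```python
import numpy as np

# Sketch: matrix of the incoming operator T in the basis (1.4), built from
# the defining relations (T - eps) f_{j,k} = f_{j,k+1}, with f_{j,w_j+1} = 0.
def incoming_operator_matrix(eps, omegas):
    n = sum(omegas)
    T = eps * np.eye(n, dtype=complex)
    start = 0
    for w in omegas:
        for k in range(w - 1):
            # (T - eps) sends the k-th chain vector to the (k+1)-st one
            T[start + k + 1, start + k] = 1.0
        start += w
    return T

T = incoming_operator_matrix(2.0, [1, 2, 3])
N = T - 2.0 * np.eye(6)                      # nilpotent part
assert np.allclose(np.linalg.matrix_power(N, 3), 0)   # order = max w_j
assert not np.allclose(np.linalg.matrix_power(N, 2), 0)
```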

The next proposition shows how a given incoming basis with
a parameter ε may be transformed into an incoming basis with a
different complex parameter.


PROPOSITION 1.2. Let ε₁ and ε₂ be complex numbers, and
let the system (1.4) be an incoming basis for θ with respect
to A-ε₁. Put

(1.10)    g_{jk} = Σ_{u=0}^{k-1} (u+ω_j-k choose u)(ε₂-ε₁)^u f_{j,k-u}.

Then {g_{jk}}_{k=1,j=1}^{ω_j,s} is an incoming basis for θ with
respect to the operator A-ε₂. Moreover, if T₁ and T₂ are the incoming
operators associated with the incoming bases (1.4) and (1.10),
respectively, then T₁ and T₂ act on the same space K and

(1.11)    (T₁-T₂)f_{jk} = (ω_j choose k)(ε₁-ε₂)^k f_{j1}

for k = 1,...,ω_j, j = 1,...,s.

PROOF. For 1 ≤ k ≤ ω_j we have g_{jk} - f_{jk} ∈ H_{k-1}. It
follows that condition (3) of an incoming basis holds for the
vectors (1.10). To check condition (2), let us write f ∼ g
whenever f-g ∈ H₁. Using (1.10) and the relations
(A-ε₁)f_{jk} ∼ f_{j,k+1} (with f_{j,ω_j+1} = 0), a direct
computation with the binomial coefficients in (1.10) shows that

(A-ε₂)g_{jk} ∼ g_{j,k+1},    k = 1,...,ω_j,

where g_{j,ω_j+1} = 0. Hence the system (1.10) is an
incoming basis.

Now consider the corresponding incoming operators.
Clearly, T₁ and T₂ act on the same space K. Take k = ω_j. Then
(T₂-ε₂)g_{jω_j} = 0, and expanding (T₁-ε₁)g_{jω_j} by (1.10) one
finds that (T₁-T₂)f_{jω_j} = (ε₁-ε₂)^{ω_j} f_{j1}. Since
f_{j1} = g_{j1}, this proves (1.11) for k = ω_j.

Finally, take 1 ≤ k ≤ ω_j-1. A similar computation yields
(1.11) for these k. Since f_{j1} = g_{j1}, the proposition is proved. □

If the vectors of two incoming bases {f_{jk}}_{k=1,j=1}^{ω_j,s} and
{g_{jk}}_{k=1,j=1}^{ω_j,s} are related by (1.10), then reversely

(1.12)    f_{jk} = Σ_{u=0}^{k-1} (u+ω_j-k choose u)(ε₁-ε₂)^u g_{j,k-u}.

This is proved by direct checking. It follows that (1.11) may
be replaced by

(1.13)    (T₂-T₁)g_{jk} = (ω_j choose k)(ε₂-ε₁)^k g_{j1}.
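The mutual inversion of (1.10) and (1.12) can also be checked numerically; the sketch below (our own) forms the two coefficient matrices for a single chain of length ω and verifies that they are inverse to each other.

```python
from math import comb
import numpy as np

# Sketch: coefficient matrix of the change of basis (1.10),
#   g_k = sum_{u=0}^{k-1} C(u + w - k, u) * a**u * f_{k-u},  a = eps2 - eps1;
# replacing a by -a gives (1.12).  The two matrices must be mutually inverse.
def change_of_basis(w, a):
    M = np.zeros((w, w))
    for k in range(1, w + 1):
        for u in range(k):
            M[k - 1, k - 1 - u] = comb(u + w - k, u) * a**u
    return M

w, a = 5, 0.7
assert np.allclose(change_of_basis(w, -a) @ change_of_basis(w, a), np.eye(w))
```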

We conclude this section with a remark about the
definition of a node with centralized singularities as given in
the preceding section. Let θ = (A,B,C;X,Y) be such a node, and
let {f_{jk}}_{k=1,j=1}^{ω_j,s} and {g_{jk}}_{k=1,j=1}^{ω_j,s} be the bases of X₃ introduced
in property (v) of a node with centralized singularities. Since
the operator B₃ appearing in property (vi) of a node with
centralized singularities has rank s, one checks easily that
the basis {f_{jk}}_{k=1,j=1}^{ω_j,s} is an incoming basis for θ relative to
the operator A-ε₁ and A₃ is the corresponding incoming
operator. Similarly, {g_{jk}}_{k=1,j=1}^{ω_j,s} is an incoming basis for θ
relative to the operator A-ε₂, with A₃^× the corresponding
incoming operator. The formulas (3.7a) and (3.7b) in Ch.I tell
us that the two bases are related as in (1.10) and (1.12).


II.2. Feedback operators related to incoming bases

In this section we continue the study of incoming
bases. Again θ = (A,B,C;X,Y) is a node without spectrum on the
contour Γ and M, M^× is the pair of spectral subspaces associated
with θ and Γ. We shall assume that

dim X/(M+M^×) < ∞.

Throughout this section ε₁ and ε₂ are two fixed complex
numbers, ε₁ ∈ Ω⁺_Γ and ε₂ ∈ Ω⁻_Γ. A triple

(2.1)    V_in = ({f_{jk}}_{k=1,j=1}^{ω_j,s}, {g_{jk}}_{k=1,j=1}^{ω_j,s}, {y_j}_{j=1}^s)

is called a triple of associated incoming data for the node θ
(with respect to the contour Γ and the points ε₁, ε₂) if
{f_{jk}}_{k=1,j=1}^{ω_j,s} and {g_{jk}}_{k=1,j=1}^{ω_j,s} are incoming bases for θ with
respect to the operators A-ε₁ and A-ε₂, respectively, and the
following identities hold true:

(2.2)    g_{jk} = Σ_{u=0}^{k-1} (u+ω_j-k choose u)(ε₂-ε₁)^u f_{j,k-u},    k = 1,...,ω_j;

(2.3)    f_{j1} - By_j ∈ M+M^×,    j = 1,...,s.

The construction of a triple of incoming data is
readily understood from the results of Section II.1. Indeed, if
one starts with an incoming basis {f_{jk}}_{k=1,j=1}^{ω_j,s} for θ with
respect to the operator A-ε₁, then according to Proposition 1.2
formula (2.2) defines an incoming basis {g_{jk}}_{k=1,j=1}^{ω_j,s} for θ with
respect to the operator A-ε₂. Further, we know that the vectors
f₁₁,...,f_{s1} form a basis of M+M^×+Im B modulo M+M^×; thus there
exist vectors y₁,...,y_s in Y such that f_{j1} - By_j ∈ M+M^× for
j = 1,...,s. Since f_{j1} = g_{j1}, we see that (2.3) holds true.
Note that the vectors y₁,...,y_s form a basis of Y modulo
B⁻¹[M+M^×]; in particular


(2.4)    Y = sp{y₁,...,y_s} ⊕ B⁻¹[M+M^×].

With a triple of associated incoming data V_in is
related in a natural way a complement of M+M^×, namely the space
K spanned by the vectors f_{jk}, k = 1,...,ω_j, j = 1,...,s. Of
course, cf. (2.2), the space K is also spanned by the vectors
g_{jk}, k = 1,...,ω_j, j = 1,...,s. The incoming operator
corresponding to the first incoming basis in V_in is denoted by
T₁ and the incoming operator corresponding to the second
incoming basis in V_in is denoted by T₂; both T₁ and T₂ act on
the subspace K.

Two operators F₁, F₂: K → Y are called a pair of
feedback operators corresponding to V_in if

(2.5)    Ax - T₁x + BF₁x ∈ M+M^×    (x ∈ K);

(2.6)    A^×x - T₂x - BF₂x ∈ M+M^×    (x ∈ K);

(2.7)    (C+F₁+F₂)f_{jk} = (ω_j choose k)(ε₁-ε₂)^k y_j.

From (1.10) one easily deduces that instead of (2.7) we may
write

(2.8)    (C+F₁+F₂)g_{jk} = -(ω_j choose k)(ε₂-ε₁)^k y_j.

The term "feedback operator" is taken from systems theory,
where in general this term refers to an operator from the state
space X into the input space Y (cf. [KFA,W]).

The construction of a pair of feedback operators
F₁, F₂ proceeds as follows. First, recall that Af_{jk} - T₁f_{jk} is an
element of M+M^× + Im B. So for some u_{jk} ∈ Y we have
Af_{jk} - T₁f_{jk} + Bu_{jk} ∈ M+M^×. Now define F₁: K → Y by setting
F₁f_{jk} = u_{jk}. Then F₁ satisfies (2.5). Next, we choose
F₂: K → Y such that (2.7) holds. This defines F₂ uniquely, and
it remains to show that (2.6) is satisfied. According to (1.11)
and (2.7) we have

A^×f_{jk} - T₂f_{jk} - BF₂f_{jk}
  = Af_{jk} - T₁f_{jk} + BF₁f_{jk} + (T₁-T₂)f_{jk} - B(C+F₁+F₂)f_{jk}
  = Af_{jk} - T₁f_{jk} + BF₁f_{jk} + (ω_j choose k)(ε₁-ε₂)^k (f_{j1}-By_j),

which belongs to M+M^× by (2.5) and (2.3). This completes the
construction of the operators F₁, F₂.

A pair of feedback operators F₁, F₂ corresponding to a
triple of associated incoming data V_in is said to be an
improved pair of feedback operators if the following additional
properties hold true:

(2.9)    Z₁x := Ax - T₁x + BF₁x ∈ M    (x ∈ K);

(2.10)   Z₂x := A^×x - T₂x - BF₂x ∈ M^×    (x ∈ K);

(2.11)   Z₁x = Z₂x = 0    for x ∈ K ∩ Ker(C+F₁+F₂).

This last condition is automatically fulfilled
whenever M ∩ M^× = (0). To see this we subtract (2.10) from
(2.9), which yields the following identity:

(2.12)    B(C+F₁+F₂)x = Z₁x + (T₁-T₂)x - Z₂x    (x ∈ K).

Now, if M ∩ M^× = (0), then X = M ⊕ K ⊕ M^×, and hence the three
terms in the right hand side of (2.12) are zero whenever
(C+F₁+F₂)x = 0. In particular, (2.11) holds true.

Any triple of associated incoming data V_in can be
transformed into a new triple of associated incoming data which
has an improved pair of feedback operators. To see this, start
with the triple V_in given by (2.1), and let F₁, F₂ be an
arbitrary pair of feedback operators for this triple. Since
M ∩ M^× is finite dimensional, one can construct closed
subspaces L and L^× such that

(2.13)    M = (M ∩ M^×) ⊕ L,    M^× = (M ∩ M^×) ⊕ L^×.

So, according to formulas (2.5) and (2.6) there exist operators
V₁, V₂: K → L and V₁^×, V₂^×: K → L^× such that

(2.14)    Ax - T₁x + BF₁x - V₁x - V₁^×x ∈ M ∩ M^×    (x ∈ K);

(2.15)    A^×x - T₂x - BF₂x - V₂x - V₂^×x ∈ M ∩ M^×    (x ∈ K).

Next we use that ε₁ is inside Γ. So the spectra of T₁ and of
the restriction A^×|M^× are disjoint. Hence there exists a unique
operator U₁: K → M^× such that

(2.16)    U₁T₁x - A^×U₁x = V₁^×x    (x ∈ K).

Similarly, using that ε₂ is outside Γ, there exists a unique
operator U₂: K → M such that

(2.17)    U₂T₂x - AU₂x = V₂x    (x ∈ K).

Put

(2.18)    f'_{jk} = f_{jk} + U₁f_{jk} + U₂f_{jk},    g'_{jk} = g_{jk} + U₁g_{jk} + U₂g_{jk},

(2.19)    F₁'(x+U₁x+U₂x) = F₁x - CU₁x,    F₂'(x+U₁x+U₂x) = F₂x - CU₂x    (x ∈ K).

LEMMA 2.1. The triple

V_in' = ({f'_{jk}}_{k=1,j=1}^{ω_j,s}, {g'_{jk}}_{k=1,j=1}^{ω_j,s}, {y_j}_{j=1}^s)

is a triple of associated incoming data and F₁', F₂' is an
improved pair of feedback operators corresponding to V_in'.


PROOF. For x ∈ K write x' = x + U₁x + U₂x. Using
A = A^× + BC together with (2.14), (2.16) and (2.17), and
recalling that V₁, V₂ and U₂ map K into M, one computes that

(A-ε₁)f'_{jk} - f'_{j,k+1} ∈ M + Im B,    k = 1,...,ω_j,

where f'_{j,ω_j+1} = 0. Since f'_{jk} - f_{jk} is an element
of M+M^×, it is clear that the vectors f'_{jk}, k = 1,...,ω_j,
j = 1,...,s, form an incoming basis for θ with respect to the
operator A-ε₁. Further, F₁' is a feedback operator which has the
property laid down in (2.9).

Next one computes, since V₁^×, V₂^× and U₁ map K into M^×,
that

(A-ε₂)g'_{jk} - g'_{j,k+1} ∈ M^× + Im B,    k = 1,...,ω_j,

where g'_{j,ω_j+1} = 0. Since g'_{jk} - g_{jk} ∈ M+M^×, it is
clear that the vectors g'_{jk}, k = 1,...,ω_j, j = 1,...,s, form
an incoming basis for θ with respect to the operator A-ε₂. Note
that F₂' is a feedback operator which has the property laid down
in (2.10).

Since g_{jk} and f_{jk} are related through formula (2.2),
this formula remains true if g_{jk} is replaced by g'_{jk} and f_{jk} by
f'_{jk}. From f'_{j1} - f_{j1} ∈ M+M^× and g'_{j1} - g_{j1} ∈ M+M^× we conclude that
(2.3) also holds with f_{j1} replaced by f'_{j1} and g_{j1} by g'_{j1}. So we
have proved that the triple V_in' is a triple of associated
incoming data. Since

(2.20)    (C+F₁'+F₂')(x+U₁x+U₂x) = (C+F₁+F₂)x    (x ∈ K),

it is clear that the operators F₁', F₂' form a pair of feedback
operators corresponding to V_in'.

It remains to establish the analogue of (2.11) for
F₁', F₂'. Take a ∈ K, and consider b = a + U₁a + U₂a. Assume
(C+F₁'+F₂')b = 0. According to (2.20) this implies that
(C+F₁+F₂)a = 0. Subtracting (2.14) from (2.15) for x = a then yields

(V₁-V₂)a + (T₁-T₂)a + (V₁^×-V₂^×)a ∈ M ∩ M^×.

Note that (V₁-V₂)a ∈ L, (T₁-T₂)a ∈ K and (V₁^×-V₂^×)a ∈ L^×. From
our choice of L and L^× we know that the spaces L, K, L^× and
M ∩ M^× form a direct sum. Thus (V₁-V₂)a = 0, (T₂-T₁)a = 0 and
(V₁^×-V₂^×)a = 0. But then it follows that the analogue of (2.11)
holds for F₁', F₂'. Thus F₁', F₂' is an improved pair of feedback
operators. □


II.3. Factorization with non-negative indices

In this section we assume that M ∩ M^× = (0), and thus

(3.1)    X = M ⊕ K ⊕ M^×,

where K is a finite dimensional complement of M + M^× in X.
We prove that a transfer function with this property admits a
right Wiener-Hopf factorization with non-negative indices. In
fact, using the incoming characteristics and the associated
feedback operators, we can describe explicitly the factors in
the Wiener-Hopf factorization, including the diagonal term.

THEOREM 3.1. Let W be the transfer function of a node
θ = (A,B,C;X,Y) without spectrum on the contour Γ, and assume
that

(3.2)    M ∩ M^× = (0),

where M, M^× is the pair of spectral subspaces associated
with θ and Γ. Then W admits a right Wiener-Hopf factorization
with respect to Γ,

(3.3)    W(λ) = W₋(λ)D(λ)W₊(λ),    λ ∈ Γ,

with factors given by

(3.8)    D(λ)y = ((λ-ε₁)/(λ-ε₂))^{ω_j} y_j,  y = y_j  (j = 1,...,s);
         D(λ)y = y,  y ∈ B⁻¹[M+M^×].

Here F₁, F₂: K → Y is an improved pair of feedback operators
corresponding to a triple

V_in = ({f_{jk}}_{k=1,j=1}^{ω_j,s}, {g_{jk}}_{k=1,j=1}^{ω_j,s}, {y_j}_{j=1}^s)

of associated incoming data for the node θ (relative to Γ and
the points ε₁, ε₂), and for i = 1, 2 and 3 the operator P_i
stands for the projection of X onto the i-th space in the
decomposition M ⊕ K ⊕ M^× along the other spaces in this
decomposition.

PROOF. Note that

(3.9)

(3.10)

Let T₁, T₂: K → K be the incoming operators for θ corresponding
to the incoming data V_in, and let Z₁: K → M and Z₂: K → M^× be
the operators defined by (2.9) and (2.10), respectively. Write
A and A^× as 3×3 operator matrices relative to the
decomposition X = M ⊕ K ⊕ M^× (3.11). Then, relative to this
decomposition,

(3.12)    A - BCP₃ + BF₁P₂ = [A|M  Z₁  0; 0  T₁  0; 0  0  A^×|M^×],

(3.13)    A^× + BCP₁ - BF₂P₂ = [A|M  0  0; 0  T₂  0; 0  Z₂  A^×|M^×].

We see that, as A and A^×, the operators A - BCP₃ + BF₁P₂ and
A^× + BCP₁ - BF₂P₂ have no spectrum on Γ. But then we can use
(3.9) and (3.10) to show that indeed for each λ ∈ Γ the right
hand side of (3.5) is the inverse of the right hand side of
(3.4) and the right hand side of (3.7) is the inverse of the
right hand side of (3.6).

Next, we compute W₋(λ)⁻¹W(λ)W₊(λ)⁻¹ for λ ∈ Γ. Take
λ ∈ Γ. Then

W₋(λ)⁻¹W(λ) = W₋(λ)⁻¹ + C(λ-A)⁻¹B + (C+F₁P₂)(P₁+P₂)(λ-A^×)⁻¹(A^×-A)(λ-A)⁻¹B
            = I + C(λ-A)⁻¹B - (C+F₁P₂)(P₁+P₂)(λ-A)⁻¹B
            = I + (CP₃ - F₁P₂)(λ-A)⁻¹B.

Multiplying by

W₊(λ)⁻¹ = I - (C+F₂P₂)(P₂+P₃)[λ - (A^×+BCP₁-BF₂P₂)]⁻¹B

and using (3.13) to simplify the resulting terms, one finds that
W₋(λ)⁻¹W(λ)W₊(λ)⁻¹ = D(λ) for λ ∈ Γ, and (3.3) is proved.

Next, observe that

(P₂+P₃)(λ-A)⁻¹(P₂+P₃) = (P₂+P₃)(λ-A₀)⁻¹(P₂+P₃),    λ ∈ Γ,

where A₀ is the compression of A to K ⊕ M^×. Since M is the
spectral subspace of A corresponding to the part of σ(A) in
Ω⁺_Γ, the operator A₀ has its spectrum in Ω⁻_Γ. In a similar way
the compression of A^× to M ⊕ K has its spectrum in Ω⁺_Γ. It
follows that W₊(·) has an analytic continuation defined on
the closure of Ω⁺_Γ, and one concludes in the same way that
W₋(·) has an analytic continuation defined on the closure of
Ω⁻_Γ. Using (3.12) (resp. (3.13)) it is easy to
see that W₋(·)⁻¹ (resp. W₊(·)⁻¹) has an analytic continuation
defined on the closure of Ω⁻_Γ (resp. Ω⁺_Γ). It follows that (3.3)
is a right Wiener-Hopf factorization and the theorem is proved. □

III. OUTGOING CHARACTERISTICS

III.1. Outgoing bases

Continuing the discussion of the preceding section, we
now assume (instead of (1.1) in Ch.II) that

(1.1)    dim(M ∩ M^×) < ∞.

Under this assumption we shall investigate the structure of
M ∩ M^×.

We begin by considering the sequence of outgoing
subspaces of θ. These subspaces are defined by

(1.2)    K_j = M ∩ M^× ∩ Ker C ∩ Ker CA ∩ ••• ∩ Ker CA^{j-1},

where j = 0,1,2,.... For j = 0 the right hand side of (1.2) is
interpreted as M ∩ M^×. The spaces K_j do not change when in
(1.2) the operator A is replaced by A^×. Clearly
K₀ ⊃ K₁ ⊃ K₂ ⊃ ••• with not all inclusions proper because of
(1.1). Let α be the smallest non-negative integer such that
K_α = K_{α+1}. We claim that K_α = (0). To see this, note that
K_α = M ∩ M^× ∩ Ker(C|A) and apply the following lemma.

LEMMA 1.1. The space M ∩ M^× ∩ Ker(C|A) = (0).

PROOF. Put Z = M ∩ M^× ∩ Ker(C|A). Then Z is a finite
dimensional subspace of X, invariant under both A and A^×.
Suppose Z is non-trivial, and let μ be an eigenvalue of the
restriction A|Z of A to Z. Since A and A^× coincide on Z, we
have that μ is an eigenvalue of A^×|Z too. But then
μ ∈ σ(A|M) ∩ σ(A^×|M^×), which contradicts the fact that σ(A|M)
and σ(A^×|M^×) are disjoint. □


Let ε be a complex number. The spaces K_j do not change
if in (1.2) the operator A is replaced by A-ε. Hence

(1.3)    K_{j+1} = {x ∈ K₁ | (A-ε)x ∈ K_j},    j = 0,1,2,... .

We shall use these identities to construct a system of vectors

(1.4)    d_{jk},    k = 1,...,α_j,  j = 1,...,t,

with the following properties:

(1)  α₁ ≥ α₂ ≥ ••• ≥ α_t ≥ 1;

(2)  (A-ε)d_{jk} = d_{j,k+1},  k = 1,...,α_j-1,  j = 1,...,t;

(3)  for m = 0,1,...,α the vectors d_{jk} with k ≤ α_j - m form a
     basis of K_m.

Such a system of vectors we shall call an outgoing basis for θ
(with respect to the operator A-ε).

For all outgoing bases for θ the integers t and
α₁,...,α_t are the same and independent of the choice of ε. In
fact one sees from (3) that

(1.5)    t = dim(K₀/K₁),

(1.6)    #{j | 1 ≤ j ≤ t, α_j ≥ k} = dim(K_{k-1}/K_k),    k = 1,...,α.

We call α₁,...,α_t the outgoing indices of the node θ. Observe
that α₁ = α. The outgoing indices are also determined by

(1.7)    #{j | 1 ≤ j ≤ t, α_j = p} = dim(K_{p-1}/K_p) - dim(K_p/K_{p+1}).

For an outgoing basis {d_{jk}}_{k=1,j=1}^{α_j,t} the following
holds:

(3a)  the vectors d_{jk}, k = 1,...,α_j, j = 1,...,t, form a
      basis of M ∩ M^×;

(3b)  the vectors d_{jk}, k = 1,...,α_j-1, α_j ≥ 2, form a
      basis of M ∩ M^× ∩ Ker C.

Conversely, if (1.4) is a system of vectors such that (1), (2),
(3a) and (3b) are satisfied, then the system (1.4) is an
outgoing basis for θ. This is readily seen by using (1.3).

Now, let us make an outgoing basis. The construction
is based on (1.3) and uses a method employed in [GKS], Section
1.6. Put

n_j = dim(K_{α-j}/K_{α-j+1}),    j = 1,...,α.

In particular, n₁ = dim K_{α-1}. Let d₁₁,...,d_{n₁,1} be a basis of
K_{α-1}. For i = 1,...,n₁ write d_{i2} = (A-ε)d_{i1}. From (1.3) it is
clear that d₁₂,...,d_{n₁,2} are vectors in K_{α-2}, linearly
independent modulo K_{α-1}. But then n₂ = dim(K_{α-2}/K_{α-1}) ≥ n₁, and
we can choose vectors d_{n₁+1,1},...,d_{n₂,1} such that
d₁₁,...,d_{n₂,1}, d₁₂,...,d_{n₁,2} form a basis of K_{α-2}. Put

d_{i3} = (A-ε)d_{i2},  i = 1,...,n₁;    d_{i2} = (A-ε)d_{i1},  i = n₁+1,...,n₂.

Again using (1.3) we see that d₁₃,...,d_{n₁,3}, d_{n₁+1,2},...,d_{n₂,2}
are vectors in K_{α-3}, linearly independent modulo K_{α-2}. It
follows that n₃ ≥ n₂, and by choosing additional vectors
d_{n₂+1,1},...,d_{n₃,1} we can produce a basis of K_{α-3}. Proceeding
in this way one obtains in a finite number of steps an outgoing
basis for θ. Observe that the construction shows that
0 = n₀ ≤ n₁ ≤ ••• ≤ n_α = t. Also α_j = α-k+1 whenever
n_{k-1}+1 ≤ j ≤ n_k.

Let the system of vectors (1.4) be an outgoing basis
for θ with respect to A-ε. Define S: M ∩ M^× → M ∩ M^× by

(1.8)    (S-ε)d_{jk} = d_{j,k+1},    k = 1,...,α_j,  j = 1,...,t,

where d_{j,α_j+1} = 0. We call S the outgoing operator for θ
associated with the outgoing basis (1.4). With respect to the
basis (1.4) the matrix of S has Jordan normal form with ε as
the only eigenvalue. There are t Jordan blocks, with sizes
α₁,...,α_t.

The next proposition is the counterpart of Proposition
1.2 in Ch. II.

PROPOSITION 1.2. Let ε₁ and ε₂ be complex numbers, and
let the system (1.4) be an outgoing basis for θ with respect to
A-ε₁. Put

(1.9)    e_{jk} = Σ_{u=0}^{k-1} (k-1 choose u)(ε₁-ε₂)^u d_{j,k-u}.

Then {e_{jk}}_{k=1,j=1}^{α_j,t} is an outgoing basis for θ with respect to
the operator A-ε₂. Further, if S₁ and S₂ are the outgoing
operators associated with the outgoing bases (1.4) and (1.9),
respectively, then S₁ and S₂ coincide on M ∩ M^× ∩ Ker C and

(1.10)    (S₁-S₂)e_{jα_j} = Σ_{u=0}^{α_j-1} (α_j choose u+1)(ε₁-ε₂)^{u+1} d_{j,α_j-u}.

PROOF. The proof is analogous to that of Proposition
1.2 in Ch. II. First one shows, by computation, that

(A-ε₂)e_{jk} = e_{j,k+1},    k = 1,...,α_j-1.

Clearly (3) is satisfied with d_{jk} replaced by e_{jk}, so the
vectors (1.9) form an outgoing basis for θ with respect to
A-ε₂. The identity (1.10) can be checked by a straightforward
calculation. A similar computation yields

(1.11)    (S₁-S₂)e_{jk} = 0,    k = 1,...,α_j-1,  α_j ≥ 2.

In view of (3b), this gives that S₁ and S₂ coincide on
M ∩ M^× ∩ Ker C, and the proof is complete. □

If the vectors of two outgoing bases {d_{jk}}_{k=1,j=1}^{α_j,t}
and {e_{jk}}_{k=1,j=1}^{α_j,t} are related by (1.9), then reversely

(1.12)    d_{jk} = Σ_{u=0}^{k-1} (k-1 choose u)(ε₂-ε₁)^u e_{j,k-u}.

Hence (1.10) may be replaced by

(1.13)    (S₂-S₁)e_{jα_j} = Σ_{u=0}^{α_j-1} (α_j choose u+1)(ε₂-ε₁)^{u+1} e_{j,α_j-u}.
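For the outgoing bases the analogous numerical check is the classical binomial (Pascal matrix) inversion: the sketch below (our own) verifies that the transforms (1.9) and (1.12) are mutually inverse on a single chain.

```python
from math import comb
import numpy as np

# Sketch: coefficient matrix of (1.9),
#   e_k = sum_{u=0}^{k-1} C(k-1, u) * a**u * d_{k-u},  a = eps1 - eps2;
# replacing a by -a gives (1.12).  The two matrices are mutually inverse.
def outgoing_change(alpha, a):
    M = np.zeros((alpha, alpha))
    for k in range(1, alpha + 1):
        for u in range(k):
            M[k - 1, k - 1 - u] = comb(k - 1, u) * a**u
    return M

alpha, a = 6, -0.3
assert np.allclose(outgoing_change(alpha, -a) @ outgoing_change(alpha, a), np.eye(alpha))
```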

As in Section II.1, we conclude this section with a
remark about the definition of a node with centralized
singularities as given in Section I.3. Let θ = (A,B,C;X,Y) be
such a node, and let {d_{jk}}_{k=1,j=1}^{α_j,t} and {e_{jk}}_{k=1,j=1}^{α_j,t} be the bases
of X₂ introduced in property (iii) of a node with centralized
singularities. In the proof of Theorem 3.1 in Ch.I we have
shown (cf. formulas (3.16) and (3.17) in Ch.I) that

C₂d_{jk} = C₂e_{jk} = 0,    k = 1,...,α_j-1.

Using this together with property (iv) of a node with
centralized singularities one easily checks that the basis
{d_{jk}}_{k=1,j=1}^{α_j,t} is an outgoing basis for θ relative to the
operator A-ε₂ and A₂ is the corresponding outgoing operator.
Similarly, {e_{jk}}_{k=1,j=1}^{α_j,t} is an outgoing basis for θ relative to
the operator A-ε₁, with A₂^× the corresponding outgoing operator.
Note that the formulas (3.5a) and (3.5b) in Ch.I tell us that
the two bases are related as in (1.9) and (1.12).


III.2. Output injection operators related to outgoing bases

We continue the study of outgoing bases. Again
θ = (A,B,C;X,Y) is a node without spectrum on Γ, the spaces
M, M^× are the spectral subspaces associated with θ and Γ, and
ε₁, ε₂ are two fixed complex numbers, ε₁ ∈ Ω⁺_Γ and ε₂ ∈ Ω⁻_Γ.
Throughout this section we assume that

dim(M ∩ M^×) < ∞.

A triple

(2.1)    V_out = ({d_{jk}}_{k=1,j=1}^{α_j,t}, {e_{jk}}_{k=1,j=1}^{α_j,t}, {z_j}_{j=1}^t)

is called a triple of associated outgoing data for the node θ
(with respect to the contour Γ and the points ε₁, ε₂) if
{d_{jk}}_{k=1,j=1}^{α_j,t} and {e_{jk}}_{k=1,j=1}^{α_j,t} are outgoing bases for θ with
respect to the operators A-ε₁ and A-ε₂, respectively, and the
following identities hold true:

(2.2)    e_{jk} = Σ_{u=0}^{k-1} (k-1 choose u)(ε₁-ε₂)^u d_{j,k-u},    k = 1,...,α_j;

(2.3)    Cd_{jα_j} = Ce_{jα_j} = z_j,    j = 1,...,t.

The construction of a triple V_out is clear from the
results of Section III.1. One starts with an outgoing basis
{d_{jk}}_{k=1,j=1}^{α_j,t} for θ with respect to the operator A-ε₁. Then,
according to Proposition 1.2, formula (2.2) defines an outgoing
basis {e_{jk}}_{k=1,j=1}^{α_j,t} for θ with respect to the operator A-ε₂.
Since the vectors d_{jk}, k = 1,...,α_j-1, belong to the kernel of
C we have Cd_{jα_j} = Ce_{jα_j}, and hence the vectors z₁,...,z_t are

Page 287: Constructive Methods of Wiener-Hopf Factorization

278 Bart, Gohberg and Kaashoek

well-defined by (Z.3). Note that the vectors zl ••••• Zt form a

basis of C[MnMXj. Since BC[MnMXj C M+M x • we conclude that

(Z.4)

To a triple of associated outgoing data Vout there

corresponds in a natural way two outgoing operators. By 51 we

denote the outgoing operator corresponding to the first

outgoing basis in Vout • and 5Z denotes the outgoing operator

corresponding to the second basis in Vout • Both 51 and 5 Z act

on the space M n MX.

Two operators Bl • BZ : Y -+- MnM x are called a ~ ~

output injection operators corresponding ~ Vout if for some

projection p of M+M x onto MnM x the following holds:

(2.5) Axx 5 B C K - IX + 1 x E er p

(2.6)

Note that all vectors appearing in (2.5) and (Z.6) are in M+M x

and hence belong to the domain of the projection p. By

substracting (2.5) from (2.6) one obtains that

(2.7) (x EM n M x) •

To construct such a pair of output injection operators x x

we consider operators Gl .G 2 .G 1 .G 2 : Y -+- X which have the

property that

(Z.8) (A-e: l )d .• x x

GI Z j G l z j (A -e: l )d. JCt j J Ct j

(A-e: 2 )e. • x x

( 2.9) G2 z j G2 z j (A-e: 2 )e. J Ct j J Ct j

for j I ••••• t. Since the vectors zl •••• 'Zt are linearly

Page 288: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 279

independent such operators exist. Note that

(2.10) Ax - SIx - G 1 Cx

(2.11) Ax - S 2 x - G 2 Cx = 0 (x € M n M x) •

Furthermore. (2.10) and (2.11) remain true if A is replaced by x x x

A • Gl by G1 and G2 by G2 • Now let IT be any projection of X

onto M n MX. and let p be the restriction of IT to M+M x • Put

(2.12)

Then B1 and BZ have the desired properties.

There is a lot of freedom in the choice of the

operators B1 .B 2 • and the properties of B1 and BZ can be

improved by specifying the projection p and the operators Gl • x x

G2 • G1 and G2 • For example one can prove that there exists a

pair of output injection operators B1 .B 2 such that

(2.13)

(2.14)

(Z.15) p(B-B 1-B Z)y = O.

where p is a projection of M+M x onto M n MX and YO is a

complement of sp{zl ••••• Zt} in B- 1 [M+M X j. Such a pair 8 1 .B 2 we

shall call an improved pair of output injection operators. We

shall obtain the existence of an improved pair of output

injection operators as a corollary of a more general theorem

which concerns the intertwinning of outgoing and incoming data

and which will be proved in the next section. The projection p

appearing in (2.13), (2.14) and (2.15) we shall refer to as the

projection corresponding ~ the improved pair ~ output

~~ctio~ operators B1.B 2 •

Page 289: Constructive Methods of Wiener-Hopf Factorization

280 Bart, Gohberg and Kaashoek

The term output injection operator is taken from

systems theory; in general it refers to an operator which maps

the output space into the state space. Roughly speaking, the

notion of a (resp. an improved) pair of output injection

operators may be viewed as the dual of the notion of a (resp.

an improved) pair of feedback operators.

111.3. Factorization with non-positive ~dices x

In this section we assume that M+M = X, and we prove

that a transfer function with this property admits a right

Wiener-Hopf factorization with non-positive indices. Moreover,

using the outgoing data and the associated output injection

operators we can describe explicitly the factors in the

Wiener-Hopf factorization

THEOREM 3.1. Let W be the transfer function of a node --- -----e = (A,B,C;X,Y) without spectrum.£.!!.. the contour r, and assume

that

( 3.1) X x

M+M ,

where M,M x k the ~~ spectral subspaces associated .&.£.!:!... e and r. Then W admits a right Wiener-Hopf factorization with

respect ~ r.

(3.2) A E r,

with factors given ~

( 3.3)

(3.4)

( 3.5)

Page 290: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

(3.6)

(3.7)

z = z. (j=I ..... t). J

Z E Y • o

x Here Bl .B 2 : Y + MnM ..!.!.~ improved ~~ output injection

operators corresponding ~~ triple

v out

281

of associated outgoing data for the node e (relative to rand

the points e: l .e: 2 ) • ...!.!. P..!.!. the projection corresponding ~ the

output injection operators Bl .B 2 • then

(3.8) x

and for i = 1.2 and 3 the operator Pi..!.!. the projection ..£!. X

onto the i-th space in the decomposition (3.8) along the other

spaces lE.. this decomposition. Further. YO..!.!. the subspace in

(2.15).

PROOF. Write A and AX as 3x3 operator matrices

relative to the decomposition (3.8):

x All 0 0

A AX x A21

X A22

x A23

x A31

x A32

x A33

Since M is the spectral subspace of A corresponding to the

part of a(A) in n;. we have a(A 33 ) en;. Similarly. x +

a(A ll ) c fir'

Let SI,S2: MnM x + MnM x be the outgoing operators

corresponding to the triple Vout ' Then relative to the

decomposition (3.8)

Page 291: Constructive Methods of Wiener-Hopf Factorization

282 Bart, Gohberg and Kaashoek

x All 0 0

(3.9) * Sl 0

0 0 A33

x All 0 0

(3.10) 0 52 * 0 0 A33

To prove (3.9) and (3.10) one uses the formulas (2.13) and

(2.14). From (3.9) and (3.10) it is clear that the operators x

A - (Pl+ P2)(B-B l )C and A + (P 2+p 3 )(B-B 2)C have no spectrum

on r. Thus for ~ E r the right hand sides of (3.3) - (3.6) are

well-defined. Take ~ E r. Let W_(~) and W+(~) be defined by

the right hand sides of (3.3) and (3.5), respectively. Then it

is clear that the operators W (~) and W+(~) are invertible and

their inverses are given by the right hand sides of (3.4) and

(3.6), respectively. -1 -1

Next, we compute W_(~) W(~)W+(~) for ~ E r. Take

~ E r. Then

Thus

W_(~)-IW(A) = W_(A)-1 + C(A-A)-I B +

-I -1 + C[A-(A-(P I+p 2 )(B-B 1)C] [A-(P 1+p 2 )(B-B 1)C-A](A-A) B

W_(~)-I+C[A-(A-(Pl+P2)(B-Bl)C]-IB

-1 I + C[A-(A-(P 1+P2)(B-B 1)C] (P3B+P2Bl).

W_(A)-IW(~)W+(~)-1 = W_(~)-IW(~)-C(A-Ax)-I(P2+P3)(B-B2) +

-1 -C[~-(A-(Pl+P2)(B-Bl)C] •

Page 292: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

W_(A)-l W(A) - C[A-(A-(P1+ P2)(B-B 1)C]-I(P2+ P3)(B-B 2 )

-1 I - C[A-(A-(P 1+p 2 )(B-B 1 )C] P2 (B-B 1-B 2 ).

Now use (3.9) and P2 = p. One sees that

l, ... ,t,

Here we used (2.15) and the fact that (cf. (2.7) and (2.3»

p(B-B 1-B 2 )Zj = (S2-S1)eja., j = 1, ••• ,t. J

-1 -1 It follows that W_(A) W(A)W+(A) = D(A) for A € rand (3.2)

is proved.

Since Im(P 1+p 2 ) = M, the operator function W_(.) has an

analytic continuation defined on the closure of n;. From

Im(P 2+p3) = MX it follows that w+(.)-l has an analytic + continuation defined on the closure of nr' Introduce

A o

283

Obviously, a(Ao ) c n; and a(A:) c nr • From (3.4), (3.5), (3.9)

and (3.10) one sees that

But then it is clear that W_(.)-1 and w+(.) have analytic

continuations defined on the closures of n; and n;,

Page 293: Constructive Methods of Wiener-Hopf Factorization

284 Bart, Gohberg and Kaashoek

respectively. Thus (3.2) is a right Wiener-Hopf factorization

and the theorem is proved. 0

For the factorization formulas in Theorem 3.2 it is

important to know the construction of an improved pair of

output injection operators. For the general case we postponed

this construction to Section IV.I. If X = M+Mx, which is the

case considered here, then the construction of an improved

pair of output injection operators is less involved and goes

as follows.

Let

and let YO be a closed linear complement of span {zI, ••• ,Zt}

in Y. Since dim(MnM x ) < ~ and X = M+Mx, there exist bounded

linear operators H: YO + M and HX : YO + MX such that x x x

By = Hy - H Y for each y € Y. Let G1 ,G 2 , G1 and G2 be the

linear operators from Y into X defined by (2.8), (2.9) and

(3.11)

It follows that

(u 1,2) •

Consider the operators

Kx x x I x x x = A -GIC M : M + M •

Let N be a closed linear complement of MnM x in M. The 2x2

operator matrix representation of K relative to the

Page 294: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 285

decomposition M x N $ (MnM ) has the following form

(3.1Z) K

x xI Here A = nA N, where n is the projection of X onto N along o x

M • To prove (3.1Z) one uses (Z.ll) and the fact that x x x

A - GZC = A - GZC. From the definition of A it is clear x + x 0

that a(Ao ) C Gr , and hence MnM is the spectral subspace of K

corresponding to the part of a(K) inside G-. In an analogous x p x

way one shows that MnM is the spectral subspace of K x +

corresponding to the past of a(K ) inside Gr. Define

p (x) Ix - 2~i ! O-K)-l xdA ,

2;i f (A-K x )-l xdA , x € MX. r

X € M,

Clearly, p is a projection of X onto MnM x• Put x

B1 = -pG 1 and B2 = pG Z • Then (2.13)-(2.15) hold true, and

hence B1 ,B Z is an improved pair of output injection operators

associated to Vout •

IV. MAIN RESULTS

IV.l. Interwinning relations for incoming and outgoing

data

Let F 1 ,F 2 : K + Y be an improved pair of feedback

operators corresponding to a triple of associated incoming data

Vine As before, T 1 ,T Z: K + K denote the incoming operators

associated with Vine Further, Vout is a triple of associated x x

outgoing data with outgoing operators 8 1 ,8 2 : M n M + M n M •

We say that an improved pair B1 ,B 2 : Y + M n MX of output

injection operators corresponding to V out is coupled to the x x

pair F 1 ,F Z if there exist operators L,L : K + M n M such that

Page 295: Constructive Methods of Wiener-Hopf Factorization

286 Bart, Gohberg and Kaashoek

the following interwinning relations hold true:

(3)

(4)

for x € K. Here p is a projection of M + MX onto MnM x

corresponding to the pair B1.B Z' and the operators x

Zl'ZZ: K + M+M are defined by (Z.9) and (Z.10)in Ch.II,

respectively.

THEOREM 1.1. Given ~ improved ~ F l' F Z ~ feedback

operators corresponding.!£.~ triple Din' ~ triple Dout has ~

improved ~ B1 ,B Z of output injection operators coupled to

the ~ F1.F Z•

PROOF. Let Din and Dout be given by (Z.l) in Ch.II and

(2.1) in Ch.III. respectively. By applying (2.4) in Ch.II and

(Z.4) in Ch.III we see that there exists a closed subspace YO

of finite codimension in Y such that

( 1. 1) Y

(1. Z)

Since MnM x is a finite dimensional subspace of M+M x , there

exist bounded linear operators H: YO + M and HX : YO + MX such x

that By = Hy - H Y for each y € yO. x x

Now, let us return to the operators G1 , GZ ' G1 and GZ appearing in (Z.8) and (Z.9) in Ch.III. So far the action of

these operators is prescribed on the vectors zl ••••• Zt only.

Using the operators Hand HX introduced in the preceding

Page 296: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 287

x x paragraph we fix the action of the operators G1 • G2 • G1 and G2

-1 x on B [M+M 1 by setting

(1. 3) x

H y. Y € YO.

Next. recall that (cf. formula (2.7) in Ch.II).

( 1. 4)

Thus each y € sp{yl ••••• y s} can be written as y = (C+F 1+F 2 )x

for some x € K. Further. by subtracting (2.10) from (Z.9) in

Ch.II one sees that

( 1. 5)

We use these connections to extend the definitions of the x x

operators G1 • GZ' G1 • GZ to all of Y by setting

(1. 6)

(1. 7)

for y = (C+F 1+F Z)x. x € K. Since the pair of feedback operators

F 1 .F 2 is an improved pair. (2.11) in Ch.II. holds true and

hence for a given y the definitions of (1.6) and (1.7) do not

depend on the special choice of x. x x

For Gl • GZ' Gl and GZ defined in this way the

following holds true:

(1. 8) x x

B = G - Gl GZ - GZ; 1

( 1. 9) 1m Gl c M. 1m x

Gl c K+M x

(1.10) 1m GZ c M+K. 1m x x

GZ c M . Let n be a projection of X onto M n MX such that

Page 297: Constructive Methods of Wiener-Hopf Factorization

288 Bart, Gohberg and Kaashoek

nx 0 for x € K. Consider

o , X € M;

n(AXx x - 5Znx - GZ Cx) , x € M,

n(Axx x x VZx - GZCx - GZF 1x), x € K,

0 , X. € x

M •

Because of (Z.10) and (Z.ll) in Ch.III the vectors V1x and VZx

are well-defined for x € M n MX. Observe that

V1,V Z : X + M n MX are bounded linear operators. 50 there exist

unique operators U1 ,U Z : X + M n MX such that

(1.11) o

(LIZ)

Introduce the following operators

(1.13)

We claim that B1 ,B Z is an improved pair of output injection

operators coupled to the pair F 1 ,F Z•

To see this, put

(1.14) px X € M+M x •

Note that p is a projection of M+M x onto MnM x • Let us check

that formulas (2.13), (2.14) and (2.15) in Ch.III hold true.

Take x € MX. Then

Page 298: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

which proves (2.13) in Ch.III. Similarly, for x € M

p(Ax-S px-B Cx) ~ 2 2

and hence (2.14) in Ch.III holds true. To prove (2.15) in

Ch. III, take y in yO. Then

p(By-B y-B y) ~ 1 2

So (2.15) in Ch.III holds true. It follows that B1 ,B 2 is an

improved pair of output injection operators with p as the

corresponding projection.

Next we prove the coupling. Define T,T x : K + M n MX

by setting

(1.15) (x € K).

289

With p,T and TX defined by (1.14) and (1.15), respectively, we

have to check the properties (1) - (4) listed in the beginning

of this section. Take x € K. Note that

Page 299: Constructive Methods of Wiener-Hopf Factorization

290 Bart, Gohberg and Kaashoek

Recall that Zlx = Ax - T1x + BF1x € M (cf. formula (2.9) in

Ch.II). It follows that

So (1) holds true. To prove (2), note that

Next, note that

x SIT X = SlUlx = U1Ax + V1x = (n+U1)Ax - nG1Cx - nGIF2x

(n+u1)Axx - (nG~-UIB)CX - nGIF2x

(n+U1)Axx + B1Cx - nG 1F2 x.

Page 300: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 291

x x Recall that Z2x = A x - TZx - BFZx E M (cf. formula (2.10) in

Ch.II). It follows that

which shows that (3) holds true. Finally, to prove (4) we note

that

IV.Z. Dilation to a node with centralized

singularities

In this section 0 = (A,B,C;X,Y) is a node without

spectrum on the contour rand M,M x is the pair of spectral

subspaces associated with 0 and r. We assume that

Let F 1 ,F 2 : K + Y be an improved pair of feedback operators

corresponding to a triple of associated incoming data Din of

Page 301: Constructive Methods of Wiener-Hopf Factorization

292 Bart, Gohberg and Kaashoek

0. As before, T1 ,T Z: K + K denote the incoming operators

associated with Vin' Further, Vout is a triple of associated x x

outgoing data with outgoing operators SI'SZ: M n M + M n M ,

and B1 ,B Z : Y + M n MX is an improved pair of output injection

operators for Vout which is coupled to the pair F 1 ,F Z' Put

and introduce the following operators

SI 0 0 0 0

0 Sz BZC BZF 1 BZF Z Bl BZ

( Z. 1) A 0 0 A BF 1 0 B B 0

0 0 0 Tl 0 0 0 0 0 0 TZ

(Z.Z) C [0 o C

THEOREM Z.1. The node 0 ----- (A,B,C;X,Y) is a dilation of

o and 0 ~~ node with centralized singularities.

To prove Theorem Z.1 we have to show that X admits a

decomposition

(Z.3) x

A X A

such that the operator matrices of A, A , Band C relative to

this decomposition have certain special properties (see Section

1.3). To describe the spaces appearing in (Z.3) let p be a

projection of M+M x onto M n MX corresponding to the pair B1 ,

BZ ' Further, let t,t x : K + M n MX be such that the coupling

properties (1) - (4) listed in the beginning of Section IV.l

are fulfilled. Then

Page 302: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

o pm+Tx

m+x Ixe:K,me:M},

u

o o

x

o

x Ixe:K},

x

x

x x pm +T x

o x

m +x

o

x

To see that formula (2.3) holds true with XI 'X 2 'X 3 and X4 defined as above, put N = M n Kerp and NX = MX n Kerp.

Obviously,

(2.4) x

For i = 1,2,3,4 we let Qi denote the projection of X onto the

i-th space in the decomposition ( 2.4) along the other spaces

this decomposition. Now consider

xl 0 u 0 x x

pm +T c

x 2 pm+Ta u 0 0

(2.5) Xo m+a + u + b + x

m +c

x3 a l ~ b 0

x 4 0 b c

293

in

Page 303: Constructive Methods of Wiener-Hopf Factorization

294 bart, Gohberg and Kaashoek

Here m € M, mX X € M , a, band c are in K and u € M n MX, and

these elements we have to find. Clearly

xl u + x

pm + T x c,

x 2 pm + Ta + u,

Xo m + u + a + b x

+ c + m ,

x3 a + b,

x 4 b + c.

Thus

Observe that m-pm € Nand mX_pm x x x X € N • Further pm+u+pm € MnM

and a+b+c € K. But then one easily checks that the following

identities hold true:

a = Q3x O - x 4 '

c = Q3x O - x 3 '

u = xl + x 2 - Q2x O x - Ta - T c,

m QlxO + x 2 - u - Ta,

x x m Q4 Xo + Xl - u - T c.

Thus given the left hand side of (2.5), the elements in the

right hand side of (2.5) are uniquely determined. It follows

that (2.3) holds true.

Let Q. be the projection of X onto the space XJ• along A J

the spaces Xi(i*j).

LEMMA 2.2. ~e ~

Page 304: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

u(y) 0

u(y) 0

Q2 By u(y) Q3 By bey)

0 bey)

0 bey)

where

r'I:")' for y Cx. x€MnM x , (2.6) u(y) for y € YO $ sp{y1····· y s};

{(T,~Tl)' for y € sp{z1····'Zt} $ YO'

(2.7) bey) for y (C+F 1+F 2 )x. x € K.

In Earticular. rank Q2 B = s and rank Q3 B t.

PROOF. In (2.6) and (2.7) the vectors y1 ••••• y s and

z1 ••••• Zt come from the given triples Vin and Vout '

respectively. The space YO is as in (2.15) in Ch.III. So

(2.8) Y

From the proof of the decomposition (2.3) it is clear that

u(y)

bey)

x Bly + B2y - Q2 By - TQ3By - T Q3By,

- Q3 By • -1 x

T a key € s p { z l' ...• Z t} $ YO = B [ M+ M 1. The n

2StS

x By € M+M • So Q2By = pBy and Q3By = O. From the last equality,

the first equality in (Z.7) is clear. Also. we see that

u(y) = p(Bl+BZ-B)y. Hence, by formula (Z.7) in Ch. III. the

first equality in (Z.6) holds true. Also. by (2.15) in Ch. III

we have u(y) = 0 for y € yO.

Next take y = (C+F l +F 2 )x with x € K. So Y is an

arbitrary element of sp{yl •••• 'Ys}. By subtracting (Z.lO) from

(2.9) in Ch. II. we see that

( Z. 9) By

Page 305: Constructive Methods of Wiener-Hopf Factorization

2~6 hart, Gohberg and Kaashoek

Since Zlx € M and Z2x € MX, this implies that

(2.10)

From the second equality in (2.10) the second equality in (2.7)

is clear. From the properties (1) - (4) for the coupling

operators (see the beginning of Section IV.l) we see that

But then u(y) = 0, and (2.6) is proved too.

From (1.10) and (1.11) in Ch.III we know th~tArank(SI­

S2) is equal t, and hence the same is true for rank Q2B. Also,

by (1.11) in Ch. II, rank(T I -T2 ) = s, which implies

that Q3 B = s.D

From (2.4) in Ch.III and (2.8) in Ch.II it is clear

that

(2.11) rank CQZ t, rank CQ3 s.

As usual AX

we write A A - BC. Note that

SI 0 -B I C -B I FI -B 1 F2 0 S2 0 0 0

AX 0 0 AX 0 A = -BF 2 0 0 0 Tl 0

0 0 0 0 T2

LEMMA 2.3. The spaces Xl and X4 .~ invariant under A A X

A and A , respectively. More precisely for x € K, m € M and

mX € MX we have ----

Page 306: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 297

(Z.lZ)

(Z.13)

A

AX A

o 0

pm+tx p(Am+Z 1x)+tT l x

m+x

x

o

X X pm +t x

o X

m +x

o x

Am+Z 1x+T 1x

T 1x

o

X X X peA m +ZZx)+t TZx

o X X

A m +ZZx+TZx

o TZx

PROOF. To prove (Z.12) we only have to show that

We use the properties (1) and (2) of the coupling operators and

formula (2.14) in Ch.II!. So

S2 pm + S2 tX + B2 Cm + B2 Cx + B2 F 1x

pAm + TT 2 x + B2 (C+F 1+F 2 )x

pAm + pZ1x + TT l x.

and (2.14) is proved.

To prove (2.13) one has to check that

(2.15)

But this is easily done by applying the properties (3) and (4)

of the coupling operators and formula (Z.13) in Ch.II!. 0

LEMMA 2.4. For u € M n MX one has

Page 307: Constructive Methods of Wiener-Hopf Factorization

298 hart, Gohberg and Kaashoek

u Sl u 0

u Sl u p(Au-S 1u)

(2.16) A u Sl u + Au-S 1u

0 0 0

0 0 0

x u S2 u p(A u-S 2u)

u S2 u 0

(2.17) AX x u S2 u + A u-S 2 u

0 0 0

0 0 0

.IE!. particular,

(2.18)

PROOF. To prove (2.16) one has to check that

pAu = S2u + B2 Cu for u € M n MX. But this is clear from (2.6)

in Ch.III. To check (2.17) one applies formula (2.5) in Ch.III

in a similar way.D

LEMMA 2.5. For x € K one has

0

0

(2.19) A x

x

x

(2.20)

In particular,

o o x

x

x

0

0

T2x

T2 x

T2 x

0

0

T1x

T1x

T1x

0

pZlx+T(TI-T2)x

+ Zl x+(T 1-T 2 )x

(T 1-T 2 )x

0

p Z 2 x+ T (T 2 - T 1 ) x

0

+ Z2 x+(T 2-T 1 )x

0

(T2- Tl)x

Page 308: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 299

(2.21)

PROOF. To prove (2.19) one applies properties (1) and

(2) of the coupling operators and formula (2.9) in Ch.II. For

(2.20) we need properties (3) and (4) of the coupling

operators and formula (2.10) in Ch.II.D

From the expressions of A and AX it is clear that both AX

operators have no spectrum on r. By P (resp. P ) we denote the A AX

Riesz projection of A (resp. A ) corresponding to the part of + the spectrum in Or. We have

I 0 0 0 0 0 0 X

PI 0 x

P2

0 0 PI P2 0 0 I 0 0 0

0 0 0 AX

0 0 I_p x 0 x

P P P3 I-P P3

0 0 0 I 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 I

Here P (resp. pX) is the Riesz projection corresponding to the

part of cr(A) (resp. cr(A x » in 0;.

(2.22)

(2.23)

(2.24)

(2.25)

LEMMA 2.6. The following identities hold ~:

P 3x = (I-P)x,

A2 PROOF. Since P P, we have

o.

(x € K);

(x € K).

Further, AP PA yields the following identities:

Page 309: Constructive Methods of Wiener-Hopf Factorization

300 Bart, Gohberg and Kaashoek

(2.26)

(2.27) - (I-P)BF l ,

(2.28)

According to formula (2.14) in Ch.III we have

(2.29)

Since the spectra of AIM and S2 are disjoint and PIP = PI' we

see from (2.26) and (2.29) that PI = pP. Next, we use (2.9) in

Ch.II to show that

(2.30) A(I-P)x - (I-P)T x = - (I-P)BF x I I

(x € K).

Further, observe that the spectra of AIKer P and TI are

disjoint. So, by comparing (2.27) and (2.30) we may conclude

that I - P = (I-P)P 3 • But PP 3 = O. Hence P3 = I-P. Now

take x € K and rewrite (2.28) as follows:

S2 P2 x - P2 Tl x pPBF1x - B2 F I x - B2 C(I-P)x

- B2 (C+F I )x + pPBF 1 x + B2 CPx

B2 (C+F I )x + pPBF1x + pPAx - S2 PPx •

It follows that

Now compare this last identity to property (1) of the coupling

operators and use that the spectra of Tl and S2 are disjoint.

One sees that LX = P 2x + pPx for x € K, and hence

P 2x = LX - PIX for x € K. We have proved the first equalities

in (2.22), (2.23) and (2.24). ~x 2 ~x

From (I-P) = I-P we see that

Page 310: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 301

(Z.31) P Xpx = 0 1 '

(I_pX)A X yields the following identities:

(Z.Z3) pXAx _ x x SI P l - B1C(I-P ),

1

(Z.33) AX x P3 x

P3TZ x

P BF z'

(Z.34) x x x x

SI PZ - PZT Z - P 1 BF Z + B1F Z + B1CP 3 •

By comparing (Z.3Z) with (Z.13) in Ch.III and using

disjointness of spectra we see that P~ = p(I_p x ). In a similar

way we obtain from (Z.10) in Ch.II and (Z.33)

that P; = pXx for x € K. Next take x € K and rewrite (Z.34) as

follows:

x x Sl P Zx - PZTZx = - P(I-px)BFZX + B1F Zx +

Bl (C+F Z)x -x x

p(I-P )BFZx - B1C(I-P )x

It follows that

x x x x Sl(P Z+p(I-P »x - (PZ+p(I-P »TZx

= B1 (C+F 2 )x + p(Axx-Tzx-BFzx).

x B1CP x

Next compare this to property (3) of the coupling operators.

Since the spectra of Sl and T2 are disjoint, it follows that

TXX = P~x + p(I_px)x for x € K, and hence the second equality

in (Z.23) holds true.D

LEMMA Z.7. The node e has ~ spectrum..£.!!. r and for the"

spectral subs paces ~,~x associated with e and r ~~

(Z.35) M

Page 311: Constructive Methods of Wiener-Hopf Factorization

302 bart, Gohberg and Kaashoek

PROOF. According to Lemma 2.6 we have

Xl xl a X2 xl p(PXO-Px3-x 1 )+Tx3

P Xo xl + PXO-Px 3-x 1+x 3

x3 0 x3 x 4 0 0

from which the first identity in (2.35) is clear. To prove the

second identity in (2.35) one uses

Xl x 2 x x pm +T x 4

x 2 X2 a (I_p X) x

Xo x 2 + m +x4

x3 0 0

x 4 0 x 4

where mX

PROOF OF THEOREM 2.1. From Lemmas 2.3, 2.4 and 2.5 it

is clear that relative to the decomposition (2.3) the operators A

AX A and can be written in the following form:

Al * * * AX Al 0 0 0

0 Sl 0 * * S2 0 0 A

0 0 T2 * AX

* 0 T1 a AX

a a a A4 * '" '" A4

Properties (iv) and (vi) of a node with centralized

singularities hold true for e because of Lemma 2.2 and formula

(2.11). By formula (2.16) and (2.17) the operators Sl and S2

have the desired action and hence property (iii) holds true.

Similarly, property (v) is satisfied by the formulas (2.19) and

(2.20). In both cases the desired bases in X2 and X3 are

Page 312: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

induced by the outgoing and incoming bases of M n MX and K,

respectively. It remains to prove the spectral properties (i)

and (11).

From the first identity in (2.35) it is clear that

o o

A + A

Since a~Sl) c ~r and a~T2) en;, we may conclude

that a(A 1 ) c nr and a(A4 ) en;. Similarly, using the second AX AX

identity in (2.35), one shows that Al and A4 have the desired

spectral properties.D

IV.3. Main Theorem and corollaries

303

THEOREM 3.1. Let W be the transfer function of a node -- ---- ----e = (A,B,C;X,Y) without spectrum ~ the contour r, and assume

that

x where M,M ..!!. the ~..£!. spectral subspaces associated

with e and r. Then W admits a right Wiener-Hopf factorization

with respect to r,

( 3 • 1 ) ~ € r,

with factors given &

-1 x W_(~) = I + C(~-A) (Q1B+Q2B+r Q3B-B1)

+ C(~-A)-lZl(~-T1)-IQ3B -1

+ (C+F1)(~-T1) Q3B,

Page 313: Constructive Methods of Wiener-Hopf Factorization

304

( 3.2) DO )Y

Bart, Gohberg and Kaashoek

I - (C+FIQ3)(Ql+Q3)(A-Ax)-IB

- C(A-Sl)-IVl(Ql+Q3)(A-Ax)-IB

-1 x - C(A-S l ) (QZB+T Q3 B- Bl)'

I + (C+F ZQ3)(Q3+Q4)(A-A)-I B

+ C(A-SZ)-IVZ(Q3+Q4)(A-A)-IB

-1 + C(A-S Z ) (QZB-BZ+TQ3B).

x -1 1- CO-A) (QZB-BZ+TQ3B+Q4B)

- C(A-Ax)-IZZ(A-TZ)-IQ3B

-1 - (C+FZ)(A-T Z ) Q3 B •

A-e: 1 w. (~) J yj • Y Yj (j=I ••••• s).

Z

Y Y E Yo' A- e: Z a. (~) J Zj • Y z. (j=I ••••• t).

1 J

Here T l • T2 : K ... K ~ incoming operators corresponding -!£.. ~

triple

.£.!.. associated incoming data for e (relative -!£.. r and the points

e: l .e: Z )' and F l .F Z : K ... Y ~.!!E.. improved~.£i. feedback

operators corresponding -!£.. Vine The operators x x. d' SI' SZ: MnM ... MnM ~ outgo1ng operators correspon 1ng -!£.. ~

triple

.£i. associated outgoing data for e. and Bl • BZ : Y ... MnM x _i_s _a_n

improved pair .£!.. output injection operators corresponding-!£..

Page 314: Constructive Methods of Wiener-Hopf Factorization

Explicit iactorization 305

V out ' which ~ coupled.!.£. the ~ F 1 ,F 2 ~ coupling operators

t, t X: K + MnM x • li. p ~ the projection corresponding .!.£. the

output injection operators B1 ,B 2 , then

(3.3) x (M n Kerp) $ (M n MX) $ K $ (M x n Kerp),

and for i = 1,2,3 and 4 the operator Qi stands for the

projection.£!.. X onto the i-th space in the decomposition (3.3)

along the other spaces -.!E.. this decomposition. Further

(3.4) Z1 x Ax - T1x + BF 1 x, x € K,

( 3.5 ) Z2 x x

A x - T2 x - BF 2x, x € K,

(3.6) x x

V 1 x Q2 A x + (B 1-t Q3 B)(C+F 1 Q3)x, x € Im(Q1+Q3)'

(3.7) V 2 x Q2 Ax + (tQ3B-B2)(F2Q3+C)x, x € Im(Q3+ Q4)·

Finally, YO is a subspace of Y such that

PROOF. Let 0 = (A,B,C;X,Y) be the dilation

of e constructed in the previous section. According to Theorem

2.1, the node 0 is a node with centralized singularities. So,

by Theorem 3.1 in Ch.I, the transfer function of 0 admits a

right Wiener-Hopf factorization. But 0 and 0 have the same

transfer function, and thus W admits a right Wiener-Hopf

factorization relative to r. From (3.9) and the proof of

Theorem 3.1 in Ch.I it is clear that the factorization can be

made in such a way that the middle term D(A) in the right hand

side of (3.1) has the desired form (3.2). Theorem 3.1 in Ch.I

applied to e also yields explicit formulas for the factors W_

and W+ in (3.1), namely

(3.8) W 0)

Page 315: Constructive Methods of Wiener-Hopf Factorization

306 hart, Gohberg and Kaashoek

(3.9)

where Q1 and Q4 are the projections of X introduced in the

previous section. We shall use these identities to get the ±1 ±1 desired expressions for W_(A) and W+(A) •

Without further explanation we shall use the notation

introduced in Section 2. Consider the node

0_ = (A_,B_,C_;MeK,Y), where

A : M e K + M e K, A

B : Y + M e K, B [ (Q1+Q2)B-B1+TXQ3B]

Q3 B

C : M 0 K + Y, C = [CM

Here AM denotes the restriction of A to M considered as an

operator from Minto M, and for any subspace L of X the

operator CL denotes theArestr!ct!o~ of C to L. The node 0 is

isomorphic to the node 0 1 = (A1,Q1B,CQ1;X1'Y). Indeed, define

o

m+a

a

o

Then J is an invertible bounded linear operator and one checks A

easily that J is a node similarly between 0 and 0 1 • lt follows -that 0 and 0 have the same transfer function, and thus 1

O-A_)-l B )-1 W 0) I + C Now write (A-A as a 2 x 2 -

Page 316: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization

operator matrix and compute the matrix product C_(A-A_)-1 B_.

One finds the desired expression for W (A).

307

- -1 To get the right formula for W_(A) we first compute

AX A - B C • We have

(3.10)

where for any subspace L of X the operator U~ is defined by

X E L.

To see this, take m E M. Then Am E M, and hence Am

and Q3Am = O. It follows that

X X Further, -Q3BCm = Q3 A m = Q3A (Q1+Q2)m. Next, take x E K. Note

that Zlx E M. So (QI+Q2)Z l x = Zlx and Q3ZIx = O. In particular

Q3Ax + Q3BFI = Q3TIx. Further, by property (4) of the coupling x x

operators, BIFlx - T Q3 TIx = -SIT x. Using these identities one

gets

x Zlx - (Ql+Q2)B(C+F 1 )x + B1(C+FI)x - T Q3B(C+F1)x

(Ql+Q2)(Ax-T l x+BF 1X) - (QI+Q2)BCx + BICx

x x - (Ql+Q2)BF 1x + B1F1x - T Q3BCx - T Q3 BFlx

(Ql+Q2+ TxQ3)Axx + B1Cx - TXQ3Ax + B1F1x - TXQ3BFIX

x x UKx - SIT x.

Finally, since Q3Z1x = 0, we have Q3Ax

Page 317: Constructive Methods of Wiener-Hopf Factorization

308 hart, Gohberg and Kaashoek

This prove (3.10).

Note that M

(3.11) M ~ K

x Hence we may write A_ as a 2 x 2 operator matrix with respect

to the decomposition given by the right hand side of (3.11); we

have

(3.12) A> [:1 (Q1+Q3):~(Q1+Q3) ].

where VI is given by (3.6). Take u E M n MX. Then

x and, by formula (2.13) in Ch.III, we see that Q2 A_u Slu.

Since AXu E MX, we have

0,

o.

Next, take x E ImQl. Then Ax E M, and thus Q3Ax

that

Further,

o. ,It follows

Page 318: Constructive Methods of Wiener-Hopf Factorization

Explicit factorization 309

Finally, take a € ImQ3 = K. Then Q3Aa = Tia - Q3BFIa by formula

(2.9) in Ch.II. Further, according property (4) of the coupling x x

operators, we have t Tia - SIt a = BIFIa. It follows that

Q AXa = x x x (Q2+ t Q3)A a + BICa - SIt a 2 -

x x x x Q2A a + (BI-t Q3 B)Ca + t Q3Aa - SIt a

x Q2 A a +

x (B 1-t Q3 B)Ca +

x x t T1a - SIt a

x - t Q3 BF 1a

x Since (Q1+Q3)A_a = (Q1+Q3)Aa, the representation (3.12) is

proved.

With respect to the decomposition (MnM x ) e Im(Q1+Q3)

the operators B_ and C have the following matrix

representations:

(3.13) B

(3.14)

Q2 B- B 1+ txQ 3 B ],

(Q1+Q3)B

-1 Since W (A) = I + C (A-A) B, we have -1- x--1 - - x -1

W_(A) = I-C_(A-A_) B_. Now compute (A-A_) as a 2 x 2

operator matrix, using the representation (3.12). Next, using

the formulas (3.13) and (3.14), make the matrix product x -1 x -1

C_(A-A_) B_, and observe that (Ql+Q3)(A-A) (Ql+Q3) = x -1 = (Ql+Q3)(A-A) • In this way one finds the desired formula

for W_O)-I.

Next, we study in more detail the function W+. Define

A+: MX e K + MX e K by setting

Here for any subspace L of X the operator UL is defined by

Page 319: Constructive Methods of Wiener-Hopf Factorization

310 Bart, Gohberg and Kaashoek

(3.15)

(3.16)

[(Q2+Q4)B-B2+TQ3Bj

Q3B

C = [C + MX

It is not difficult to check that the node θ_+ = (A_+,B_+,C_+;M^× ⊕ K,Y) is similar to the node θ_4 = (A_4,Q_4B,CQ_4;X_4,Y) with the node similarity given by [...], which is an invertible bounded linear operator. It follows that θ_+ and θ_4 have the same transfer function, and thus W_+(λ) = I + C_+(λ−A_+)^{-1}B_+.

To get the function W_+ in the desired form, we use the fact that M^× ⊕ K = (M ∩ M^×) ⊕ Im(Q_3+Q_4). It turns out that with respect to the decomposition (M ∩ M^×) ⊕ Im(Q_3+Q_4) the operator A_+ has the following form:

(3.17)  A_+ = [ S_2   V_2                   ]
              [ 0     (Q_3+Q_4)A(Q_3+Q_4)  ],

where V_2 is given by (3.7). To see this, take u ∈ M ∩ M^×. Then [...]


and thus, using formula (2.14) in Ch.III, we see that Q_2A_+u = S_2u. Since Au ∈ M = Im(Q_1+Q_2), we have

Q_3A_+u = 0,    Q_4A_+u = 0.

Next, take x ∈ Im Q_4. Then A^×x ∈ M^×, and thus Q_3A^×x = 0. It follows that [...]. Further, [...].

Finally, take a ∈ Im Q_3 = K. Then Q_3A^×a = T_2a + Q_3BF_2a. Further, by property (2) of the coupling operators, we have TT_2a − S_2Ta = −B_2F_2a. It follows that

Q_2U_Ka − S_2Ta = (Q_2 + TQ_3)Aa − B_2Ca − S_2Ta
                = Q_2Aa + (TQ_3B − B_2)Ca + TQ_3A^×a − S_2Ta
                = Q_2Aa + (TQ_3B − B_2)Ca + TT_2a − S_2Ta + TQ_3BF_2a
                = Q_2Aa + (TQ_3B − B_2)(C + F_2)a = V_2a.

Since (Q_3+Q_4)A_+a = (Q_3+Q_4)Aa, the representation (3.17) is proved.

With respect to the decomposition (M ∩ M^×) ⊕ Im(Q_3+Q_4) the operators B_+ and C_+ have the following matrix representations:


(3.18)  [...]

(3.19)  [...]

Now compute (λ−A_+)^{-1}, using the 2 × 2 matrix representation (3.17). Next, using the expressions (3.18) and (3.19), form the matrix product C_+(λ−A_+)^{-1}B_+, and observe that (Q_3+Q_4)(λ−A)^{-1}(Q_3+Q_4) = (Q_3+Q_4)(λ−A)^{-1}. In this way one finds the desired formula for W_+(λ).

To get the desired formula for W_+(λ)^{-1} we compute A_+^× = A_+ − B_+C_+. With respect to the decomposition M^× ⊕ K the operator A_+^× has the following 2 × 2 operator matrix representation:

(3.20)  A_+^× = [ A^×|_{M^×}   0    ]
                [ [...]        [...] ].

Here A^×|_{M^×} is the restriction of A^× to M^×, considered as an operator from M^× into M^×. To prove (3.20) one applies (2.13) and the fact that A_+^×(J^×)^{-1} = (J^×)^{-1}A_4^×. Now compute (λ−A_+^×)^{-1}, using the representation (3.20). Next, using (3.15) and (3.16), form the matrix product C_+(λ−A_+^×)^{-1}B_+. In this way one finds the desired formula for W_+(λ)^{-1}. □

COROLLARY 3.2. Let W be the transfer function of a node θ = (A,B,C;X,Y) without spectrum on the contour Γ, and assume that

dim(M ∩ M^×) < ∞,    codim(M + M^×) < ∞,

where M, M^× is the pair of spectral subspaces associated with θ and Γ. Put

s = dim (M + M^× + ImB)/(M + M^×),    t = dim (M ∩ M^×)/(M ∩ M^× ∩ KerC),


ω_j = #{k | dim (M + M^× + ImB + ⋯ + ImA^{k-1}B)/(M + M^× + ImB + ⋯ + ImA^{k-2}B) ≥ s − j + 1},

α_j = #{m | dim (M ∩ M^× ∩ KerC ∩ ⋯ ∩ KerCA^{m-2})/(M ∩ M^× ∩ KerC ∩ ⋯ ∩ KerCA^{m-1}) ≥ j}.

Then −α_1 ≤ −α_2 ≤ ⋯ ≤ −α_t < 0 < ω_1 ≤ ω_2 ≤ ⋯ ≤ ω_s are the right factorization indices of W with respect to Γ.

PROOF. Apply Theorem 3.1. Formula (3.2) gives the factorization indices. They are determined by the formulas (1.5), (1.6) in Ch.II and (1.5), (1.6) in Ch.III, which are exactly the formulas given above. □
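To illustrate the index formulas of Corollary 3.2, here is a small worked example. The one-dimensional realization below is our own choice for illustration (assuming ε_1 lies inside Γ and ε_2 outside); it is not taken from the text.

```latex
% Scalar function with a 1-dimensional realization (hypothetical example):
W(\lambda)=\frac{\lambda-\varepsilon_1}{\lambda-\varepsilon_2}
          =1+C(\lambda-A)^{-1}B,\qquad
X=\mathbb{C},\quad A=\varepsilon_2,\quad B=1,\quad C=\varepsilon_2-\varepsilon_1 .
% Then A^{\times}=A-BC=\varepsilon_1, so \sigma(A)=\{\varepsilon_2\} lies
% outside \Gamma and \sigma(A^{\times})=\{\varepsilon_1\} inside; hence
M=(0),\qquad M^{\times}=(0).
% The formulas of the corollary then give
t=\dim\frac{M\cap M^{\times}}{M\cap M^{\times}\cap\operatorname{Ker}C}=0,
\qquad
s=\dim\frac{M+M^{\times}+\operatorname{Im}B}{M+M^{\times}}
 =\dim\frac{\mathbb{C}}{(0)}=1,
\qquad \omega_1=1 .
```

So the only right factorization index is +1, as one expects for a function that winds once around the origin along Γ.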

The particular case when X = M ⊕ M^× is also covered by Theorem 3.1. Note that X = M ⊕ M^× means that M ∩ M^× = (0) and X = M + M^×. So, if X = M ⊕ M^×, then by Theorem 3.1 the function W admits a Wiener-Hopf factorization relative to Γ and all factorization indices are zero (cf. Corollary 3.2), and thus the factorization is a canonical one. Further, in this case, the projections Q_2 and Q_3 are both zero and Q_4 = I − Q_1 is equal to the projection of X along M onto M^× (i.e., Q_4 = Π, where Π is as in Theorem 2.1 in Ch.I). Inserting this information in the formulas for W_-(λ)^{-1} and W_+(λ)^{-1} yields the formulas for canonical factorization given in Theorem 2.1 in Ch.I.
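For the reader's convenience we sketch the canonical factorization formulas being specialized here, as we recall them from the realization approach of [BGK1] (a hedged recollection, presumably the content of Theorem 2.1 in Ch.I; Π denotes the projection of X onto M^× along M):

```latex
W(\lambda)=W_-(\lambda)W_+(\lambda),\qquad\lambda\in\Gamma,\\[4pt]
W_-(\lambda)=I+C(\lambda-A)^{-1}(I-\Pi)B,\qquad
W_+(\lambda)=I+C\Pi(\lambda-A)^{-1}B,\\[4pt]
W_-(\lambda)^{-1}=I-C(I-\Pi)(\lambda-A^{\times})^{-1}B,\qquad
W_+(\lambda)^{-1}=I-C(\lambda-A^{\times})^{-1}\Pi B .
```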

We conclude with a few remarks about the case when W is an m × m rational matrix function which is analytic at ∞ and has the value I at ∞. Assume W has no poles and zeros on Γ. Then W appears as the transfer function of a finite dimensional node θ = (A,B,C;ℂ^n,ℂ^m) such that A and A^× have no eigenvalues on Γ. Since the state space of θ is finite dimensional, it follows that the condition

dim(M ∩ M^×) < ∞,    codim(M + M^×) < ∞

is fulfilled. Thus W admits a Wiener-Hopf factorization relative to Γ, and we can apply Theorem 3.1 to get explicit formulas for the factors. In this case the diagonal term D(λ) may be written as an m × m diagonal matrix whose diagonal entries are given by [...].

To get D(λ) as a diagonal matrix the factors W_- and W_+ have to be multiplied by suitable constant invertible operators.
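The display listing the diagonal entries has been lost in the source; from the standard form of the middle term (cf. formula (2.1) of the following paper in this volume) they are presumably of the form below, with κ_1 ≤ ⋯ ≤ κ_m the right factorization indices (zero indices allowed):

```latex
D(\lambda)=\operatorname{diag}\!\left(
\Bigl(\tfrac{\lambda-\varepsilon_1}{\lambda-\varepsilon_2}\Bigr)^{\kappa_1},
\dots,
\Bigl(\tfrac{\lambda-\varepsilon_1}{\lambda-\varepsilon_2}\Bigr)^{\kappa_m}
\right),\qquad \kappa_1\le\cdots\le\kappa_m .
```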

The factors W_- and W_+ produced in the proof of Theorem 3.1 are transfer functions of nodes with state spaces M ⊕ K and M^× ⊕ K, respectively. Note that M ⊕ K and M^× ⊕ K are subspaces of the state space of θ. It follows that in the rational case the McMillan degree δ(W_-) of W_- (i.e., the minimal dimension of the state space of a node for W_- (see, e.g., [BGK1], Section 4.2)) is less than or equal to the McMillan degree δ(W) of W. Similarly, δ(W_+) ≤ δ(W). This leads to the following corollary.

COROLLARY 3.3. An m × m rational matrix function W with no poles and zeros on the contour Γ admits a Wiener-Hopf factorization with respect to Γ with factors W_- and W_+ that are rational matrix functions with McMillan degrees not exceeding the McMillan degree of W.

PROOF. By applying a suitable Möbius transformation we may assume that W is analytic at ∞ and has the value I at ∞. But then, as explained above, the corollary is a consequence of (the proof of) Theorem 3.1. □

REFERENCES

[BGK1] Bart, H., Gohberg, I., Kaashoek, M.A.: Minimal factorization of matrix and operator functions.


Operator Theory: Advances and Applications, Vol.1. Basel-Boston-Stuttgart, Birkhauser Verlag, 1979.

[BGK2] Bart, H., Gohberg, I., Kaashoek, M.A.: Wiener-Hopf factorization of analytic operator functions and realization. Rapport 231, Wiskundig Seminarium, Vrije Universiteit, Amsterdam, 1983.

[BGK3] Bart, H., Gohberg, I., Kaashoek, M.A.: Wiener-Hopf factorization and realization. In: Mathematical Theory of Networks and Systems, Proceedings of the MTNS-83 International Symposium, Beer Sheva, Israel, Lecture Notes in Control and Information Sciences, no. 58 (Ed. P. Fuhrmann), Springer Verlag, Berlin, 1984, pp. 42-62.

[BGK4] Bart, H., Gohberg, I., Kaashoek, M.A.: Invariants for Wiener-Hopf equivalence of analytic operator functions. This volume.

[Br] Brodskii, M.S.: Triangular and Jordan representation of linear operators. Transl. Math. Monographs, Vol.32, Providence, R.I., Amer. Math. Soc., 1970.

[CG] Clancey, K., Gohberg, I.: Factorization of matrix functions and singular integral operators. Operator Theory: Advances and Applications, Vol.3, Basel-Boston-Stuttgart, Birkhauser Verlag, 1981.

[GF] Gohberg, I.C., Feldman, I.A.: Convolution equations and projection methods for their solution. Transl. Math. Monographs, Vol.41, Providence, R.I., Amer. Math. Soc., 1974.

[GK1] Gohberg, I.C., Krein, M.G.: Systems of integral equations on a half line with kernels depending on the difference of arguments. Amer. Math. Soc. Transl. (2) 14, 217-287 (1960).

[GK2] Gohberg, I.C., Krein, M.G.: Theory and applications of Volterra operators in Hilbert space. Transl. Math. Monographs, Vol.24, Providence, R.I., Amer. Math. Soc., 1970.

[GKr] Gohberg, I., Krupnik, N.: Einführung in die Theorie der eindimensionalen singulären Integraloperatoren.

Basel-Boston-Stuttgart, Birkhauser Verlag, 1979.

[GKS] Gohberg, I., Kaashoek, M.A., Van Schagen, F.: Similarity of operator blocks and canonical forms. I. General results, feedback equivalence and Kronecker indices. Integral Equations and Operator Theory 3, 350-396 (1980).

[GL] Gohberg, I.C., Leiterer, J.: A criterion for factorization of operator functions with respect to a contour. Sov. Math. Dokl. 14, No.2, 425-429 (1973).

[K] Krein, M.G.: Integral equations on the half line with a kernel depending on the difference of the arguments. Amer. Math. Soc. Transl. (2) 22, 163-288 (1962).

[KFA] Kalman, R.E., Falb, P.L., Arbib, M.A.: Topics in mathematical systems theory. New York, McGraw-Hill, 1969.

[KR] Kaashoek, M.A., Ran, A.C.M.: Symmetric Wiener-Hopf factorization of selfadjoint rational matrix functions and realization. This volume.

[W] Wonham, W.M.: Linear Multivariable Control. Berlin, Springer Verlag, 1974.

H. Bart,

Econometrisch Instituut

Erasmus Universiteit

Postbus 1738

3000 DR Rotterdam

The Netherlands

M.A. Kaashoek

Subfaculteit Wiskunde en

Informatica

Vrije Universiteit

Postbus 7161

1007 MC Amsterdam

The Netherlands

I. Gohberg

Dept of Mathematical Sciences

The Raymond and Beverly Sackler

Faculty of Exact Sciences

Tel-Aviv University

Ramat-Aviv, Israel


Operator Theory: Advances and Applications, Vol. 21 © 1986 Birkhauser Verlag Basel

INVARIANTS FOR WIENER-HOPF EQUIVALENCE OF

ANALYTIC OPERATOR FUNCTIONS

H. Bart, I. Gohberg, M.A. Kaashoek


Necessary conditions for Wiener-Hopf equivalence are established in terms of the incoming and outgoing subspaces associated with realizations of the given analytic operator functions. Other results about the behaviour of the incoming and outgoing subspaces under certain elementary operations are also included.

1. Introduction and main result

Let W_1 and W_2 be continuous functions on a positively oriented contour Γ in the complex plane ℂ and assume that their values are invertible bounded linear operators on a complex Banach space Y. The functions W_1 and W_2 are called (right) Wiener-Hopf equivalent with respect to Γ if W_1 and W_2 are related in the following way:

(1.1)  W_1(λ) = W_-(λ)W_2(λ)W_+(λ),    λ ∈ Γ,

where W_- and W_+ are analytic on the outer domain Ω_Γ^- (which contains ∞) and the inner domain Ω_Γ^+ of Γ, respectively, both W_- and W_+ are continuous up to the boundary and their values are invertible operators on Y. In case Y is finite dimensional and the functions W_1 and W_2 are Hölder continuous on the contour, it is well-known (see, e.g., [CG]) that W_1 and W_2 are right Wiener-Hopf equivalent if and only if W_1 and W_2 have the same right factorization indices.

Next let us consider the case when the functions W_1 and W_2 are analytic on Γ and appear as transfer functions of nodes θ_1 = (A_1,B_1,C_1;X_1,Y) and θ_2 = (A_2,B_2,C_2;X_2,Y) without


spectrum on Γ (see [BGK4], Ch.I), that is, W_1 and W_2 are given in the realized form

W_1(λ) = I + C_1(λ−A_1)^{-1}B_1,    λ ∈ Γ,
W_2(λ) = I + C_2(λ−A_2)^{-1}B_2,    λ ∈ Γ,

where A_u: X_u → X_u, B_u: Y → X_u and C_u: X_u → Y are bounded linear operators, the space X_u is a complex Banach space and the operators A_u and A_u^× := A_u − B_uC_u have no spectrum on the contour Γ. Now, let M_u and M_u^× be the spectral subspaces

(1.2)  M_u = Im((1/2πi) ∫_Γ (λ−A_u)^{-1} dλ),

(1.3)  M_u^× = Ker((1/2πi) ∫_Γ (λ−A_u^×)^{-1} dλ),

and assume in addition that

(1.4)  dim(M_u ∩ M_u^×) < ∞,    codim(M_u + M_u^×) < ∞    (u = 1,2).

Then it follows from Corollary 3.2 in Ch.IV of [BGK4] that W_1 and W_2 are right Wiener-Hopf equivalent if and only if

(1.5)  dim X_1/(M_1 + M_1^× + ImB_1 + ⋯ + ImA_1^{m-1}B_1) = dim X_2/(M_2 + M_2^× + ImB_2 + ⋯ + ImA_2^{m-1}B_2)

and

(1.6)  dim(M_1 ∩ M_1^× ∩ KerC_1 ∩ ⋯ ∩ KerC_1A_1^{m-1}) = dim(M_2 ∩ M_2^× ∩ KerC_2 ∩ ⋯ ∩ KerC_2A_2^{m-1})

for m = 0,1,2,…. (Actually, the results of [BGK4] apply to a slightly more general situation involving contours on the Riemann sphere ℂ ∪ {∞}.)


Wiener-Hopf equivalence 319

In general, when the finite dimensionality conditions in (1.4) are not fulfilled, then conditions (1.5) and (1.6) do not give much information and one may expect that they have to be replaced by the existence of certain isomorphisms between the subspaces (or their complements) involved. The next theorem, which is the main result of this paper, guarantees the existence of such isomorphisms.

THEOREM 1.1. For u = 1,2, let

W_u(λ) = I + C_u(λ−A_u)^{-1}B_u

be the transfer function of a node θ_u = (A_u,B_u,C_u;X_u,Y) without spectrum on the contour Γ. Suppose W_1 and W_2 are Wiener-Hopf equivalent with respect to Γ. Let the Riesz projections P_1, P_1^× and the bounded linear operators ψ, Φ: X_1 → X_2 be given by

P_1 = (1/2πi) ∫_Γ (λ−A_1)^{-1} dλ,    P_1^× = (1/2πi) ∫_Γ (λ−A_1^×)^{-1} dλ,

ψ = (1/2πi) ∫_Γ (λ−A_2)^{-1} B_2 W_-(λ)^{-1} C_1 (λ−A_1)^{-1} dλ,

Φ = (1/2πi) ∫_Γ (λ−A_2^×)^{-1} B_2 W_-(λ)^{-1} C_1 (λ−A_1)^{-1} (I−P_1^×)P_1 dλ,

where W_- is as in (1.1). Then, for m = 0,1,2,…,

M_1 + M_1^× + ImB_1 + ⋯ + ImA_1^{m-1}B_1 = ψ^{-1}[M_2 + M_2^× + ImB_2 + ⋯ + ImA_2^{m-1}B_2],

ImΦ + M_2 + M_2^× + ImB_2 + ⋯ + ImA_2^{m-1}B_2 = X_2.


We do not know whether in general the existence of appropriate isomorphisms between the subspaces (or their complements) implies Wiener-Hopf equivalence. There are indications (see Theorem III.2.1 in [GKS], which deals with Wiener-Hopf equivalence of operator polynomials) that a converse statement is true under the additional condition that the subspaces are complemented.

Theorem 1.1 is proved in Section 7 and to prove it we use results from Sections 2, 3, 5 and 6. In Section 4 we describe the behaviour of the spaces

(1.7)  H_m = M + M^× + ImB + ImAB + ⋯ + ImA^{m-1}B,

(1.8)  K_m = M ∩ M^× ∩ KerC ∩ KerCA ∩ ⋯ ∩ KerCA^{m-1}

under the operation of dilation. Along the way we obtain a

proof of the only if part of Theorem 2.2 in Ch.I of [BGK4]. For

the type of contour considered here, this theorem can be

formulated as follows.

THEOREM 1.2. Let W(λ) = I + C(λ−A)^{-1}B be the transfer function of a node θ = (A,B,C;X,Y) without spectrum on the contour Γ, and let M and M^× be the spectral subspaces

(1.9)  M = Im((1/2πi) ∫_Γ (λ−A)^{-1} dλ),

(1.10)  M^× = Ker((1/2πi) ∫_Γ (λ−A^×)^{-1} dλ).

Then W admits a (right) Wiener-Hopf factorization with respect to Γ if and only if

dim(M ∩ M^×) < ∞,    codim(M + M^×) < ∞.

The if part of this result is proved in [BGK4]. In fact, the setting there is slightly more general than the one


considered here. We shall comment on this after the proof of the only if part, which will be given in Section 6.

Generally speaking, notation and terminology are as in [BGK4], the paper preceding the present one in this volume. For the convenience of the reader, we recall a few of the main definitions, thereby making some of the notions already used above more precise. A node (or a system) is a quintet θ = (A,B,C;X,Y), where A: X → X, B: Y → X and C: X → Y are (bounded) linear operators acting between complex Banach spaces X and Y. In other words A ∈ L(X), B ∈ L(Y,X) and C ∈ L(X,Y). The node is said to be finite dimensional if both X and Y are finite dimensional spaces; otherwise it is called infinite dimensional. The spectrum of the operator A is denoted by σ(A) and the operator function

W(λ) = I + C(λ−A)^{-1}B,

defined and analytic on the complement of σ(A), is called the transfer function of θ. We say that θ has no spectrum on (the given contour) Γ if A and A^× := A − BC have no spectrum on Γ. In that case one can introduce the spectral subspaces (1.9) and (1.10). The pair M, M^× obtained this way is called the pair of spectral subspaces associated with θ and Γ. The spaces H_m given by (1.7) are called the incoming subspaces, the spaces K_m given by (1.8) the outgoing subspaces for θ. Note that H_0 = M + M^× and K_0 = M ∩ M^×. (For slightly more general definitions involving contours on the Riemann sphere, see [BGK4].)

An earlier version of the present paper appeared in the report [BGK2]; the main results were also announced in [BGK3].


2. Simple nodes with centralized singularities

In this section we study operator functions having the diagonal form of the middle factor in a Wiener-Hopf factorization. As in Section 1.2 of [BGK4], we fix complex numbers ε_1 in Ω_Γ^+ and ε_2 in Ω_Γ^-. Let Y be a complex Banach space and consider the operator function

(2.1)  D(λ) = Π_0 + Σ_{j=1}^{t} ((λ−ε_1)/(λ−ε_2))^{−α_j} Π_{−j} + Σ_{j=1}^{s} ((λ−ε_1)/(λ−ε_2))^{ω_j} Π_j.

Here Π_{−t},…,Π_{−1},Π_1,…,Π_s are mutually disjoint one dimensional projections of Y, Σ_{j=−t}^{s} Π_j is the identity operator I_Y on Y, α_1,…,α_t, ω_1,…,ω_s are positive integers and −α_1 ≤ ⋯ ≤ −α_t < 0 < ω_1 ≤ ⋯ ≤ ω_s (cf. Section 1.3 of [BGK4], formula (3.9)). We allow for t or s to be zero. This simply

means that in (2.1) the first or second sum does not appear.

For j = 1,…,t, let A_j^-: ℂ^{α_j} → ℂ^{α_j}, B_j^-: ℂ → ℂ^{α_j} and C_j^-: ℂ^{α_j} → ℂ be given by

A_j^- = [ ε_1  1              ]
        [      ε_1  ⋱         ]
        [           ⋱    1    ]
        [                ε_1  ],

B_j^- = [...],    C_j^- = (ε_1−ε_2)^{α_j} [0 ⋯ 0 1].

Then the transfer function of the node θ_j^- = (A_j^-,B_j^-,C_j^-;ℂ^{α_j},ℂ) has the form ((λ−ε_1)/(λ−ε_2))^{−α_j}. Identifying ℂ with the one-dimensional subspace ImΠ_{−j} of Y, we obtain a node θ_j^- = (A_j^-,B_j^-,C_j^-;ℂ^{α_j},ImΠ_{−j}) whose transfer function is [...].


Clearly the node θ_j^- has no spectrum on the given contour Γ and ℂ^{α_j}, ℂ^{α_j} is the pair of spectral subspaces associated with θ_j^- and Γ. More generally, the outgoing subspaces of θ_j^- are

ℂ^{α_j}, ℂ^{α_j−1} × (0), …, ℂ × (0)^{α_j−1}, (0)^{α_j}, (0)^{α_j}, …,

and the incoming subspaces of θ_j^- are all equal to ℂ^{α_j}. It follows that θ_j^- has no incoming indices and only one outgoing index, namely α_j. In fact, the standard basis of ℂ^{α_j} is an outgoing basis for θ_j^- with respect to the operator A_j^- − ε_1 (cf. [BGK4]).

Next, for j = 1,…,s, let A_j^+: ℂ^{ω_j} → ℂ^{ω_j}, B_j^+: ℂ → ℂ^{ω_j} and C_j^+: ℂ^{ω_j} → ℂ be given by

A_j^+ = [...],    B_j^+ = [ 1 ]
                          [ 0 ]
                          [ ⋮ ]
                          [ 0 ],    C_j^+ = [...].

Then the transfer function of the node θ_j^+ = (A_j^+,B_j^+,C_j^+;ℂ^{ω_j},ℂ) has the form ((λ−ε_1)/(λ−ε_2))^{ω_j}. Identifying ℂ with the one-dimensional subspace ImΠ_j of Y, we obtain a node θ_j^+ = (A_j^+,B_j^+,C_j^+;ℂ^{ω_j},ImΠ_j) whose transfer function is [...]. This node has no spectrum on Γ and (0)^{ω_j}, (0)^{ω_j} is the pair of spectral subspaces associated with θ_j^+ and Γ. More generally, the incoming subspaces of θ_j^+ are


(0)^{ω_j}, ℂ × (0)^{ω_j−1}, …, ℂ^{ω_j−1} × (0), ℂ^{ω_j}, ℂ^{ω_j}, …,

and the outgoing subspaces of θ_j^+ are all equal to (0)^{ω_j}. It follows that θ_j^+ has no outgoing indices and only one incoming index, namely ω_j. In fact, the standard basis of ℂ^{ω_j} is an incoming basis for θ_j^+ with respect to the operator A_j^+ − ε_2 (cf. [BGK4]).

Now let us return to (2.1). Note that D(λ) can be written as

D(λ) = Π_0 + Σ_{j=1}^{t} D_j^-(λ)Π_{−j} + Σ_{j=1}^{s} D_j^+(λ)Π_j.

It follows that D is the transfer function of the "direct sum" of the nodes θ_1^-,…,θ_t^-, θ_1^+,…,θ_s^+ and the trivial node θ_0 = (0,0,0;(0),ImΠ_0) whose transfer function is I_{ImΠ_0}. More precisely, D is the transfer function of the node θ_D = (A_D,B_D,C_D;X_D,Y), where A_D is the block diagonal operator

A_D = diag(A_1^-, …, A_t^-, 0, A_1^+, …, A_s^+): X_D → X_D,

and B_D: Y → X_D and C_D: X_D → Y are the block diagonal operators

B_D = diag(B_1^-, …, B_t^-, 0, B_1^+, …, B_s^+),    C_D = diag(C_1^-, …, C_t^-, 0, C_1^+, …, C_s^+).


Here the matrix representations are taken with respect to the decompositions

X_D = ℂ^{α_1} ⊕ ⋯ ⊕ ℂ^{α_t} ⊕ (0) ⊕ ℂ^{ω_1} ⊕ ⋯ ⊕ ℂ^{ω_s},    Y = ImΠ_{−1} ⊕ ⋯ ⊕ ImΠ_{−t} ⊕ ImΠ_0 ⊕ ImΠ_1 ⊕ ⋯ ⊕ ImΠ_s.

Clearly θ_D has no spectrum on Γ and for the pair M_D, M_D^× of spectral subspaces associated with θ_D and Γ, we have

M_D = M_D^× = ℂ^{α_1} ⊕ ⋯ ⊕ ℂ^{α_t} ⊕ (0) ⊕ (0) ⊕ ⋯ ⊕ (0).

More generally, we can get the outgoing and incoming subspaces of θ_D by taking the direct sums of the corresponding spaces of the nodes θ_1^-,…,θ_t^-, θ_1^+,…,θ_s^+. In a similar way an outgoing basis for θ_D (with respect to the operator A_D − ε_1) and an incoming basis for θ_D (with respect to the operator A_D − ε_2) can be obtained. It follows that

(2.2)  [...]

α_1,…,α_t are the outgoing indices of θ_D and ω_1,…,ω_s are the incoming indices of θ_D. The last two assertions can also be formulated as

(2.3)  [...],    j = 1,…,t,

(2.4)  [...],    j = 1,…,s.


Note also that

(2.5)  dim(M_D ∩ M_D^×) = α_1 + ⋯ + α_t < ∞,

(2.6)  codim(M_D + M_D^×) = ω_1 + ⋯ + ω_s < ∞.

The node θ_D constructed above is clearly minimal. Also it is a simple node with centralized singularities in the sense of Section 1.3 of [BGK4]. The following result was already announced in the discussion after the proof of Theorem 3.1 in Ch.I of [BGK4].

PROPOSITION 2.1. Let D be as above and let the node θ be a minimal realization of D on a neighbourhood of ∞. Then θ is a simple node with centralized singularities.

PROOF. Since the transfer functions of θ and θ_D coincide on a neighbourhood of ∞, the nodes θ and θ_D are quasi-similar (cf. [H]). Now the state space of θ_D is finite dimensional, so in fact θ and θ_D are similar. As we observed, θ_D is a node with centralized singularities. It follows that θ is also of this type. □

For completeness we recall from Section 1.3 of [BGK4] that the transfer function of a simple node with centralized singularities has the form of the middle term in a Wiener-Hopf factorization, i.e., it admits a representation as in (2.1).

3. Multiplication by plus and minus terms

Part of the next theorem was already announced in the discussion following the proof of Theorem 3.1 in Ch.I of [BGK4]. For simplicity we restrict ourselves here to the case when Γ is a positively oriented contour in the (finite) complex plane. The general case was treated in [BGK2].


THEOREM 3.1. Let W be an operator function defined and analytic on an open neighbourhood of the contour Γ and with values in L(Y), where Y is a complex Banach space. Then, given a (right) Wiener-Hopf factorization

(3.1)  W(λ) = W_-(λ)D(λ)W_+(λ),    λ ∈ Γ,

relative to the contour Γ and the points ε_1, ε_2, with the middle term D of the form (2.1), i.e.,

D(λ) = Π_0 + Σ_{j=1}^{t} ((λ−ε_1)/(λ−ε_2))^{−α_j} Π_{−j} + Σ_{j=1}^{s} ((λ−ε_1)/(λ−ε_2))^{ω_j} Π_j,

(−α_1 ≤ −α_2 ≤ ⋯ ≤ −α_t < 0 < ω_1 ≤ ω_2 ≤ ⋯ ≤ ω_s), there exists

a node θ = (A,B,C;X,Y) with the following properties:

(a) θ is a node with centralized singularities relative to the contour Γ and the points ε_1, ε_2 (in particular θ has no spectrum on Γ);

(b) On the contour Γ the transfer function of θ coincides with W, i.e.,

W(λ) = I_Y + C(λ−A)^{-1}B,    λ ∈ Γ;

(c) If M, M^× is the pair of spectral subspaces associated with θ and Γ, then dim(M ∩ M^×) < ∞ and codim(M + M^×) < ∞;

(d) The number t of negative factorization indices in the Wiener-Hopf factorization (3.1) and the negative factorization indices −α_1,…,−α_t themselves are given by

t = dim (M ∩ M^×)/(M ∩ M^× ∩ KerC),

α_j = #{m | dim (M ∩ M^× ∩ KerC ∩ ⋯ ∩ KerCA^{m-2})/(M ∩ M^× ∩ KerC ∩ ⋯ ∩ KerCA^{m-1}) ≥ j},    j = 1,…,t;

j l, ... ,t;

Page 337: Constructive Methods of Wiener-Hopf Factorization

328 Bart, Gohberg and Kaashoek

~ other words the absolute values ~ the negative

factorization indices ~ just the outgoing indices of 9;

(e) The number s ~ positive factorization indices ~ the

Wiener-Hopf factorization (3.1) and the positive

factorization indices wI' ••• ' OlS themselves ~ given ~

s = dim M + MX+ImB

M + MX

{mldim M + MX+

Ol. J M + MX+

1mB + • •• + ImAm- 1B >=s-j+l},

ImAm- 2 B 1mB + •.• +

j 1, ••• ,s;

~ other words the positive factorization indices ~~

the incoming indices ~ e. The condition imposed on W can also be formulated as

follows: On (a neighbourhood of) r, the function W can be

written as the transfer function of a node (cf. [BGKlj, Section

2.3).

In order to simplify the proof of Theorem 3.1, and motivated by the definition of Wiener-Hopf factorization given in Section 1.2 of [BGK4], we introduce the following terminology. An operator function W is called a plus (minus) function (with respect to the contour Γ) if W is analytic on Ω_Γ^+ (Ω_Γ^-), continuous on the closure of Ω_Γ^+ (Ω_Γ^-) and has invertible values. If W is a plus (minus) function, the same is true for W^{-1} (given by W^{-1}(λ) = W(λ)^{-1}).

A related concept can be defined for nodes. We call a node θ = (A,B,C;X,Y) a plus (minus) node (with respect to Γ) if the spectra of A and A^× are contained in Ω_Γ^- (Ω_Γ^+). In particular such a node has no spectrum on Γ. The transfer function of a plus (minus) node is obviously a plus (minus) function. In fact it is even analytic on an open neighbourhood (in ℂ ∪ {∞}) of the closure of Ω_Γ^+ (Ω_Γ^-). Conversely, suppose W is analytic and takes invertible values on an open neighbourhood U of the


closure of Ω_Γ^+. In the case considered here, where Ω_Γ^+ is bounded, we may assume that U is bounded too, and then W can be written (on U) as the transfer function of a plus node (cf. [BGK1], Ch.II). Similar remarks hold when Ω_Γ^+ is replaced by Ω_Γ^-, provided W(∞) = I_Y.

We shall now consider products of the type θ = θ_-θ_0θ_+, where θ_+ is a plus node, θ_- is a minus node and θ_0 is a node without spectrum on Γ. Recall from [BGK1] that, if θ_- = (A_-,B_-,C_-;X_-,Y), θ_0 = (A_0,B_0,C_0;X_0,Y) and θ_+ = (A_+,B_+,C_+;X_+,Y), the product θ = θ_-θ_0θ_+ is given by θ = (A,B,C;X,Y), where X = X_- ⊕ X_0 ⊕ X_+ and

(3.2)  A = [ A_-   B_-C_0   B_-C_+ ]
           [ 0     A_0      B_0C_+ ]
           [ 0     0        A_+    ],

(3.3)  B = [ B_- ]
           [ B_0 ] : Y → X_- ⊕ X_0 ⊕ X_+,
           [ B_+ ]

(3.4)  C = [ C_-   C_0   C_+ ] : X_- ⊕ X_0 ⊕ X_+ → Y.

Since θ_-, θ_0 and θ_+ have no spectrum on Γ, the operator A has no spectrum on Γ. The same is true for the operator A^×, which can be written as

(3.5)  A^× = [ A_-^×     0         0     ]
             [ -B_0C_-   A_0^×     0     ]
             [ -B_+C_-   -B_+C_0   A_+^× ].

So θ = θ_-θ_0θ_+ also has no spectrum on Γ.

It is convenient to introduce the following notation.

If E_-, E_0 and E_+ are subspaces of X_-, X_0 and X_+, respectively, then, by definition,

E_- ⊕ E_0 ⊕ E_+ = {x_- + x_0 + x_+ | x_- ∈ E_-, x_0 ∈ E_0, x_+ ∈ E_+} ⊆ X.


THEOREM 3.2. Suppose θ_-, θ_0, θ_+ are as above and θ = θ_-θ_0θ_+.

(1) Let H_{00}, H_{01}, H_{02}, … be the incoming subspaces of θ_0. Then the incoming subspaces H_0, H_1, H_2, … of θ are given by

H_m = X_- ⊕ H_{0m} ⊕ X_+,    m = 0,1,2,….

(2) Let K_{00}, K_{01}, K_{02}, … be the outgoing subspaces of θ_0. Then the outgoing subspaces K_0, K_1, K_2, … of θ are given by

K_m = (0) ⊕ K_{0m} ⊕ (0),    m = 0,1,2,….

Note that all these subspaces are well-defined because θ_0 and θ have no spectrum on Γ.

PROOF. First consider the case m = 0. We then need to show that

(3.6)  M + M^× = X_- ⊕ (M_0 + M_0^×) ⊕ X_+,    M ∩ M^× = (0) ⊕ (M_0 ∩ M_0^×) ⊕ (0),

where M_0, M_0^× is the pair of spectral subspaces associated with θ_0 and M, M^× is the pair of spectral subspaces associated with θ. Write M_0 = ImP_0, M = ImP, where P_0 is the Riesz projection corresponding to the part of σ(A_0) lying in Ω_Γ^+ and P is the Riesz projection corresponding to the part of σ(A) lying in Ω_Γ^+. Since σ(A_-) ⊂ Ω_Γ^+ and σ(A_+) ⊂ Ω_Γ^-, the projection P has the form


(3.7)  P = [ I   P_1   P_2 ]
           [ 0   P_0   P_3 ]
           [ 0   0     0   ].

The fact that P² = P implies certain relationships between the operators appearing in (3.7) and these can be used to show that

P = [ I   −P_1   P_1P_3−P_2 ] [ I   0     0 ] [ I   P_1   P_2 ]
    [ 0   I      −P_3       ] [ 0   P_0   0 ] [ 0   I     P_3 ]
    [ 0   0      I          ] [ 0   0     0 ] [ 0   0     I   ],

while, moreover, the first and last factor in the right hand side of this identity are each other's inverses. From this we see that

(3.8)  M = [ I   −P_1   P_1P_3−P_2 ]
           [ 0   I      −P_3       ] (X_- ⊕ M_0 ⊕ (0)).
           [ 0   0      I          ]

In the same way one proves that

(3.9)  [...]

Combining (3.8) and (3.9), one immediately gets (3.6).

We have proved the desired result now for m = 0. For arbitrary m, it is obtained by an induction argument based on the special form of the operators A, B and C and the identities (3.6). □
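As a consistency check on the product construction used above, one can verify directly that the transfer function of a product of two nodes is the product of the transfer functions (a hedged sketch; the three-factor formulas (3.2)–(3.4) follow by iterating it):

```latex
% For the product of two nodes, with
A=\begin{pmatrix}A_1 & B_1C_2\\ 0 & A_2\end{pmatrix},\qquad
B=\begin{pmatrix}B_1\\ B_2\end{pmatrix},\qquad
C=\begin{pmatrix}C_1 & C_2\end{pmatrix},
% the resolvent is block upper triangular:
(\lambda-A)^{-1}=\begin{pmatrix}
(\lambda-A_1)^{-1} & (\lambda-A_1)^{-1}B_1C_2(\lambda-A_2)^{-1}\\
0 & (\lambda-A_2)^{-1}\end{pmatrix},
% and expanding I+C(\lambda-A)^{-1}B gives
I+C(\lambda-A)^{-1}B
=\bigl(I+C_1(\lambda-A_1)^{-1}B_1\bigr)\bigl(I+C_2(\lambda-A_2)^{-1}B_2\bigr).
```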

COROLLARY 3.3. Suppose θ_-, θ_0 and θ_+ are as above and θ = θ_-θ_0θ_+. Let M_0, M_0^× and M, M^× be the pairs of spectral subspaces associated with θ_0 and θ, respectively. Then

(3.10)  [...]

(3.11)  [...]

and in that case the incoming (outgoing) indices of θ_0 and θ are the same.

PROOF. The equivalence of (3.10) and (3.11) is an immediate consequence of (3.6). If (3.10) or (3.11) is satisfied, then there are incoming indices and outgoing indices associated with θ_0 and θ. From formula (1.6) in Ch.II of [BGK4], formula (1.6) in Ch.III of [BGK4] and Theorem 3.2 above, we see that the incoming (outgoing) indices of θ_0 and θ are the same. □

If (3.10) or (3.11) is satisfied, there exist incoming (outgoing) bases for θ_0 and θ. Given an incoming (outgoing) basis for θ_0 one can easily construct an incoming (outgoing) basis for θ. Just map the basis for θ_0, which consists of vectors in X_0, into X by the natural injection of X_0 into X = X_- ⊕ X_0 ⊕ X_+.

We are now ready to prove Theorem 3.1.

PROOF OF THEOREM 3.1. First suppose that W_- and W_+ can be written as the transfer functions of a minus node θ_- = (A_-,B_-,C_-;X_-,Y) and a plus node θ_+ = (A_+,B_+,C_+;X_+,Y), respectively. Let θ_D = (A_D,B_D,C_D;X_D,Y) be the realization of D constructed in Section 2. Then θ_D is a simple node with centralized singularities and (2.2)-(2.6) hold. Put θ = θ_-θ_Dθ_+ and write θ = (A,B,C;X,Y). Then A, B, C and A^× are given by (3.2)-(3.5) with A_0, B_0, C_0 and X_0 replaced by A_D, B_D, C_D and X_D, respectively. From this it is clear that θ is a node with centralized singularities. For λ ∈ Γ, the transfer function of θ has the value W_-(λ)D(λ)W_+(λ) = W(λ). So on Γ the transfer function of θ coincides with W. Let M, M^× be the pair of spectral subspaces associated with θ and Γ. From (2.5),


(2.6) and Corollary 3.3 we see that dim(M ∩ M^×) < ∞ and dim(X/(M + M^×)) < ∞. Also the assertions (d) and (e) hold true because of (2.2), (2.3), (2.4) and Theorem 3.2.

Next, assume that there exists an invertible bounded linear operator S on Y such that the functions W̄_-(λ) = S^{-1}W_-(λ) and W̄_+(λ) = W_+(λ)S can be written as the transfer functions of a minus node θ̄_- and a plus node θ̄_+, respectively. Consider the factorization

(3.12)  W̄(λ) = W̄_-(λ)D(λ)W̄_+(λ),    λ ∈ Γ,

where W̄(λ) = S^{-1}W(λ)S. This is a Wiener-Hopf factorization of the type considered in the first paragraph of this proof. Put θ̄ = θ̄_-θ_Dθ̄_+, where θ_D is again as in Section 2. As we saw above, θ̄ is a realization (on Γ) of W̄ with the desired properties. Write θ̄ = (Ā,B̄,C̄;X,Y) and define θ = (A,B,C;X,Y) by

A = Ā,    B = B̄S^{-1},    C = SC̄.

Then A^× = Ā^× and one easily verifies that (a)-(e) are satisfied. Actually, the incoming (outgoing) subspaces for θ̄ and the incoming (outgoing) subspaces for θ are exactly the same.

It remains to prove that an operator S with the properties mentioned above can always be found. For this, we argue as follows. Recall that ∞ belongs to Ω_Γ^-. For S we take the (invertible) operator W_-(∞). Define W̄_-, W̄_+ and W̄ as in the second paragraph of the proof. Then W̄_-(∞) = I_Y and we have the Wiener-Hopf factorization (3.12). Clearly

W̄_-(λ) = W̄(λ)W̄_+(λ)^{-1}D(λ)^{-1},    λ ∈ Γ.

Inspecting both sides of this identity and using the specific properties of W̄_-, W̄_+, D and the fact that W̄ is analytic on a neighbourhood of Γ, we see that W̄_- has an analytic extension, again denoted by W̄_-, to a neighbourhood U of the


closure Ω_Γ^- ∪ Γ of Ω_Γ^-. By taking U sufficiently small, we may assume that W̄_- takes only invertible values on U. But then W̄_- can be written (on U) as the transfer function of a minus node (cf. [BGK1], Chapter II, especially Theorem 2.5 and Corollary 2.7). Writing (3.12) in the form

W̄_+(λ) = D(λ)^{-1}W̄_-(λ)^{-1}W̄(λ),    λ ∈ Γ,

we see that W̄_+ has an analytic extension, again denoted by W̄_+, to a neighbourhood V of the closure Ω_Γ^+ ∪ Γ of Ω_Γ^+ and V can be chosen in such a way that W̄_+ takes only invertible values on V. Since Ω_Γ^+ is bounded, we may assume that V is bounded and it follows from the results discussed in [BGK1], Chapter II that W̄_+ can be written (on V) as the transfer function of a plus node. This completes the proof. □

4. Dilation

Suppose θ_0 = (A_0,B_0,C_0;X_0,Y) is a node and θ = (A,B,C;X,Y) is a dilation of θ_0. This means that the state space X of θ admits a topological decomposition X = X_1 ⊕ X_0 ⊕ X_2 relative to which the operators A, B and C have the form

A = [ A_1   *     *   ]
    [ 0     A_0   *   ] : X_1 ⊕ X_0 ⊕ X_2 → X_1 ⊕ X_0 ⊕ X_2,
    [ 0     0     A_2 ]

B = [ *   ]
    [ B_0 ] : Y → X_1 ⊕ X_0 ⊕ X_2,
    [ 0   ]

C = [ 0   C_0   * ] : X_1 ⊕ X_0 ⊕ X_2 → Y.

Clearly σ(A) ⊆ σ(A_1) ∪ σ(A_0) ∪ σ(A_2). In general this inclusion


may be strict, but in the finite dimensional case (to which we shall restrict ourselves later) it is not. On the complement of σ(A_1) ∪ σ(A_0) ∪ σ(A_2), the transfer functions of θ_0 and θ coincide.
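That the transfer functions of θ_0 and its dilation θ coincide off the spectra can be checked numerically. A hedged sketch, assuming the block forms displayed above (random matrices, all names ours):

```python
import numpy as np

# Hedged sketch (random matrices, names ours): a dilation theta of a node
# theta_0, with block upper triangular A, B = (*, B0, 0)^T, C = (0, C0, *).
rng = np.random.default_rng(1)
m, n1, n0, n2 = 2, 2, 3, 2
A0, B0, C0 = (rng.standard_normal((n0, n0)), rng.standard_normal((n0, m)),
              rng.standard_normal((m, n0)))
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
X10, X12, X02 = (rng.standard_normal((n1, n0)), rng.standard_normal((n1, n2)),
                 rng.standard_normal((n0, n2)))
Bstar, Cstar = rng.standard_normal((n1, m)), rng.standard_normal((m, n2))

A = np.block([[A1, X10, X12],
              [np.zeros((n0, n1)), A0, X02],
              [np.zeros((n2, n1)), np.zeros((n2, n0)), A2]])
B = np.vstack([Bstar, B0, np.zeros((n2, m))])
C = np.hstack([np.zeros((m, n1)), C0, Cstar])

def W(lam, A, B, C):
    n = A.shape[0]
    return np.eye(C.shape[0]) + C @ np.linalg.solve(lam * np.eye(n) - A, B)

# The filled-in blocks (*) drop out of C (lam - A)^{-1} B:
lam = 2.3 + 1.1j
ok = np.allclose(W(lam, A, B, C), W(lam, A0, B0, C0))
print(ok)  # True
```

The blocks marked * never enter the product C(λ−A)^{-1}B, which is why the transfer function only sees the middle corner (A_0, B_0, C_0).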

From now on we shall assume that θ_0 has no spectrum on the contour Γ and, in addition, that A_1 and A_2 have no spectrum on Γ. Then A has no spectrum on Γ and the same is true for A^x, because A^x has the form

A^x = [ A_1  *      *   ]
      [ 0    A_0^x  *   ]
      [ 0    0      A_2 ]

So the conditions imposed on θ_0 and A_1, A_2 imply that θ has no spectrum on Γ too. In the finite dimensional case the converse is also true.

THEOREM 4.1. Let θ_0 = (A_0,B_0,C_0;X_0,Y) and its dilation θ = (A,B,C;X,Y) be as above. Then there exists an invertible bounded linear operator S: X → X such that the following holds:

(1) Let H_00, H_01, H_02, ... be the incoming subspaces of θ_0. Then the incoming subspaces H_0, H_1, H_2, ... of θ are given by

H_m = S[ X_1 ⊕ H_0m ⊕ X_2 ],    m = 0,1,2,....

(2) Let K_00, K_01, K_02, ... be the outgoing subspaces of θ_0. Then the outgoing subspaces K_0, K_1, K_2, ... of θ are given by

K_m = S[ (0) ⊕ K_0m ⊕ (0) ],    m = 0,1,2,....

Here we use the same notation as in Theorem 3.2. The incoming and outgoing subspaces mentioned in the theorem all


exist because θ_0 and θ have no spectrum on Γ.

PROOF. Let M_0, M_0^x and M, M^x be the pairs of spectral subspaces associated with θ_0 and θ, respectively. Write M_0 = Im P_0, M_0^x = Ker P_0^x, M = Im P and M^x = Ker P^x, where P_0, P_0^x, P and P^x are the appropriate Riesz projections. With respect to the decomposition X = X_1 ⊕ X_0 ⊕ X_2, the projections P and P^x have the form

(4.1)  P = [ P_1  R_1  R_2 ]
           [ 0    P_0  R_3 ]
           [ 0    0    P_2 ]

(4.2)  P^x = [ P_1  R_1^x  R_2^x ]
             [ 0    P_0^x  R_3^x ]
             [ 0    0      P_2   ]

This is clear from the matrix representations of A and A^x. The fact that P and P^x are projections implies certain relationships between the operators appearing in (4.1) and (4.2). Taking advantage of these relationships one can check that with

S_1 = (I−P_1)R_1P_0 − P_1R_1^x(I−P_0^x),

S_2 = (I−P_1)R_2P_2 − P_1R_2^x(I−P_2),

S_3 = (I−P_0)R_3P_2 − P_0^xR_3^x(I−P_2),

the following identity holds

(p 1

o

o

I

o

o

x x -R1(I-PO)PO-R 1

I

o

Rl(I-PO)R3P2-R2

-R3

I


I

o o

From this we see that

(4.3)  M = [ I  S_1  S_2 ] [ Im P_1 ]
           [ 0  I    S_3 ] [ M_0    ]
           [ 0  0    I   ] [ Im P_2 ]

By symmetry (replace P by I−P^x and P^x by I−P), one also has

(4.4)  M^x = [ I  S_1  S_2 ] [ Ker P_1 ]
             [ 0  I    S_3 ] [ M_0^x   ]
             [ 0  0    I   ] [ Ker P_2 ]

Combining (4.3) and (4.4), one gets

(4.5)  M + M^x = [ I  S_1  S_2 ] [ X_1         ]
                 [ 0  I    S_3 ] [ M_0 + M_0^x ],
                 [ 0  0    I   ] [ X_2         ]

       M ∩ M^x = [ I  S_1  S_2 ] [ (0)         ]
                 [ 0  I    S_3 ] [ M_0 ∩ M_0^x ].
                 [ 0  0    I   ] [ (0)         ]

For S we now take the invertible operator

S = [ I  S_1  S_2 ]
    [ 0  I    S_3 ].
    [ 0  0    I   ]

Then the desired result holds true for m = 0. For arbitrary m, it is obtained by a straightforward argument based on the special form of the operators A, B and C and the identities (4.5). □

As in the proof of Theorem 4.1, let M_0, M_0^x and M, M^x be the pairs of spectral subspaces associated with θ_0 and θ, respectively. Then clearly dim(M_0 ∩ M_0^x) < ∞ and dim(X_0/(M_0+M_0^x)) < ∞ if and only if dim(M ∩ M^x) < ∞ and dim(X/(M+M^x)) < ∞, and in that case the incoming (outgoing) indices of θ_0 and θ are the same (cf.


Corollary 3.3). This result can also be explained on the level of incoming (outgoing) bases. Given an incoming (outgoing) basis for θ_0 one can construct an incoming (outgoing) basis for θ and vice versa. The construction uses the operator S introduced in the proof of Theorem 4.1. We omit the details.

Now let us restrict ourselves to the finite dimensional case. Let W be a rational m×m matrix function having the value I_m at ∞. Suppose W has neither poles nor zeros on the contour Γ. Then W can be written (in several ways) as the transfer function of a finite dimensional node θ without spectrum on Γ. Associated with such a node are incoming and outgoing spaces. Theorem 1.1 implies the validity of formulas (1.5) and (1.6). In other words, the dimensions of the outgoing spaces and the codimensions of the incoming spaces are "spectral characteristics" of the transfer function W: they do not depend on the choice of the realization θ for W. In the finite dimensional case considered here, the proof of this fact can be given as follows.

Let θ_1 and θ_2 be finite dimensional nodes without spectrum on the contour Γ, both having the rational m×m matrix function W as their transfer function. Then θ_1 is the dilation of a minimal node θ_10 and θ_2 is the dilation of a minimal node θ_20. The transfer functions of θ_10 and θ_20 coincide on Γ and hence they coincide (as rational functions) on the whole Riemann sphere. But then θ_10 and θ_20 are similar by the well-known state space isomorphism theorem. So we may assume that θ_1 and θ_2 are dilations of one single minimal node θ_0. The desired result is now immediate from Theorem 4.1.

5. Spectral characteristics of transfer functions:

outgoing spaces

In this section we investigate the situation where the transfer functions of (possibly infinite dimensional) nodes coincide on the given (positively oriented) contour Γ in the (finite) complex plane.


THEOREM 5.1. Let θ_1 = (A_1,B_1,C_1;X_1,Y) and θ_2 = (A_2,B_2,C_2;X_2,Y) be nodes without spectrum on the contour Γ. Suppose that the transfer functions of θ_1 and θ_2 coincide on Γ. Then the bounded linear operator Ψ: X_1 → X_2, given by

(5.1)  Ψ = (1/2πi) ∫_Γ (λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} dλ,

maps the outgoing subspaces of θ_1 one-to-one onto the corresponding outgoing subspaces of θ_2. In other words, for m = 0,1,2,..., the operator Ψ maps M_1 ∩ M_1^x ∩ Ker C_1 ∩ ... ∩ Ker C_1A_1^{m-1} one-to-one onto M_2 ∩ M_2^x ∩ Ker C_2 ∩ ... ∩ Ker C_2A_2^{m-1}. Here M_1, M_1^x and M_2, M_2^x are the pairs of spectral subspaces associated with Γ and θ_1, θ_2, respectively.

It is interesting to note that when θ_1 = θ_2, the operator Ψ acts on M_1 ∩ M_1^x as the identity operator.

PROOF. The proof requires several steps. We begin by showing that Ψ maps M_1 ∩ M_1^x into M_2 ∩ M_2^x.

Take x ∈ M_1 ∩ M_1^x. Recall that M_j = Im P_j and M_j^x = Ker P_j^x, where the Riesz projections P_j and P_j^x are given by

P_j = (1/2πi) ∫_Γ (λ−A_j)^{-1} dλ,    P_j^x = (1/2πi) ∫_Γ (λ−A_j^x)^{-1} dλ,    j = 1,2.

So x ∈ M_1 ∩ M_1^x means that x = P_1x = (I−P_1^x)x. But then

Ψx = (I−P_2^x)Ψx + (1/2πi) ∫_Γ P_2^x (λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} P_1 x dλ.

Consider the latter integral. The functions P_2^x(λ−A_2^x)^{-1} and (λ−A_1)^{-1}P_1 both have an analytic extension to a neighbourhood of Ω_Γ^- ∪ Γ vanishing at ∞. Hence the integral vanishes. So

Ψx = (I−P_2^x)Ψx,


which implies that Ψx ∈ Im(I−P_2^x) = Ker P_2^x = M_2^x.

For λ ∈ Γ, let W(λ) = I + C_1(λ−A_1)^{-1}B_1. By hypothesis W(λ) = I + C_2(λ−A_2)^{-1}B_2 too. Moreover, W takes invertible values on Γ, so we can write W(λ)^{-1} = I − C_j(λ−A_j^x)^{-1}B_j (j = 1,2). Using that B_2C_2 = A_2−A_2^x and B_1C_1 = A_1−A_1^x, one easily verifies that

(λ−A_2^x)^{-1}B_2W(λ) = (λ−A_2)^{-1}B_2,    λ ∈ Γ,

W(λ)C_1(λ−A_1^x)^{-1} = C_1(λ−A_1)^{-1},    λ ∈ Γ.
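Both identities depend only on the relation BC = A − A^x for a single node, and can be checked numerically; a hedged sketch (random matrices, all names ours):

```python
import numpy as np

# Hedged sketch: for A^x = A - BC and W(lam) = I + C (lam - A)^{-1} B,
#   (lam - A^x)^{-1} B W(lam) = (lam - A)^{-1} B,
#   W(lam) C (lam - A^x)^{-1} = C (lam - A)^{-1},
# wherever both resolvents exist.
rng = np.random.default_rng(2)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
Ax = A - B @ C
I = np.eye(n)

lam = 0.8 + 1.3j
W = np.eye(m) + C @ np.linalg.solve(lam * I - A, B)

lhs1 = np.linalg.solve(lam * I - Ax, B) @ W
rhs1 = np.linalg.solve(lam * I - A, B)
print(np.allclose(lhs1, rhs1))  # True

lhs2 = W @ C @ np.linalg.solve(lam * I - Ax, I)
rhs2 = C @ np.linalg.solve(lam * I - A, I)
print(np.allclose(lhs2, rhs2))  # True
```

Algebraically, (λ−A^x)^{-1}B W(λ) = (λ−A^x)^{-1}[(λ−A) + BC](λ−A)^{-1}B = (λ−A)^{-1}B, and the second identity follows dually.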

It follows that Ψ can also be written as

Ψ = (1/2πi) ∫_Γ (λ−A_2)^{-1} B_2 C_1 (λ−A_1^x)^{-1} dλ.

But then

Ψx = P_2Ψx + (1/2πi) ∫_Γ (I−P_2)(λ−A_2)^{-1} B_2 C_1 (λ−A_1^x)^{-1} (I−P_1^x) x dλ.

The latter integral again vanishes. Indeed, the functions (I−P_2)(λ−A_2)^{-1} and (λ−A_1^x)^{-1}(I−P_1^x) have analytic extensions to a neighbourhood of Ω_Γ^+ ∪ Γ, and so we can apply Cauchy's theorem. Thus

Ψx = P_2Ψx,

and in particular Ψx ∈ Im P_2 = M_2. We conclude that Ψx ∈ M_2 ∩ M_2^x.

From now on we shall consider Ψ as a bounded linear operator from M_1 ∩ M_1^x into M_2 ∩ M_2^x. In an analogous way the expression


(5.2)  Ψ′ = (1/2πi) ∫_Γ (λ−A_1^x)^{-1} B_1 C_2 (λ−A_2)^{-1} dλ

defines a bounded linear operator Ψ′ from M_2 ∩ M_2^x into M_1 ∩ M_1^x. We shall prove that Ψ′ and Ψ are each other's inverse.

Take λ, μ ∈ Γ. If λ ≠ μ one can use the resolvent equation to show that

C_1(λ−A_1)^{-1}(μ−A_1)^{-1}B_1 = (1/(λ−μ))[C_1(μ−A_1)^{-1}B_1 − C_1(λ−A_1)^{-1}B_1]
 = (1/(λ−μ))[W(μ) − W(λ)] = C_2(λ−A_2)^{-1}(μ−A_2)^{-1}B_2.
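The resolvent-equation step can be checked numerically for a single node; a hedged sketch (random data, all names ours):

```python
import numpy as np

# Hedged sketch of the resolvent equation for R(z) = (z - A)^{-1}:
#   R(lam) R(mu) = [R(mu) - R(lam)] / (lam - mu),
# hence C R(lam) R(mu) B = [C R(mu) B - C R(lam) B] / (lam - mu).
rng = np.random.default_rng(3)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
I = np.eye(n)

def R(z):
    return np.linalg.solve(z * I - A, I)

lam, mu = 1.2 + 0.7j, -0.5 + 1.4j
lhs = C @ R(lam) @ R(mu) @ B
rhs = (C @ R(mu) @ B - C @ R(lam) @ B) / (lam - mu)
print(np.allclose(lhs, rhs))  # True
```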

By continuity, it follows that the first and last term in these identities are also equal for λ = μ. Take y ∈ M_2 ∩ M_2^x and put x = Ψ′y. Then x ∈ M_1 ∩ M_1^x. Combining (5.1), (5.2) and the identity established above, we get

ΨΨ′y = (1/2πi)² ∫_Γ ∫_Γ (λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} (μ−A_1^x)^{-1} B_1 C_2 (μ−A_2)^{-1} y dμ dλ = y.

So ΨΨ′ is the identity operator on M_2 ∩ M_2^x and, by symmetry, Ψ′Ψ is the identity operator on M_1 ∩ M_1^x. It is clear now that Ψ maps M_1 ∩ M_1^x one-to-one onto M_2 ∩ M_2^x.

Our next step is to show that Ψ maps M_1 ∩ M_1^x ∩ Ker C_1 into M_2 ∩ M_2^x ∩ Ker C_2 and that

(5.3)  ΨA_1x = A_2Ψx,    x ∈ M_1 ∩ M_1^x ∩ Ker C_1.

Take x ∈ M_1 ∩ M_1^x ∩ Ker C_1. Then

C_2Ψx = (1/2πi) ∫_Γ C_2(λ−A_2^x)^{-1} B_2 C_1(λ−A_1)^{-1} x dλ
      = (1/2πi) ∫_Γ [W(λ)−I] C_1(λ−A_1^x)^{-1} x dλ
      = (1/2πi) ∫_Γ W(λ) C_1(λ−A_1^x)^{-1} x dλ − (1/2πi) ∫_Γ C_1(λ−A_1^x)^{-1} x dλ
      = (1/2πi) ∫_Γ C_1(λ−A_1)^{-1} x dλ − C_1P_1^x x
      = C_1P_1x − C_1P_1^x x = C_1x = 0.

Thus Ψx ∈ Ker C_2. Since Ψx ∈ M_2 ∩ M_2^x too, we have Ψx ∈ M_2 ∩ M_2^x ∩ Ker C_2.

In order to prove (5.3), we argue as follows. From C_1x = 0 and x ∈ M_1 ∩ M_1^x it is clear that A_1^x x = A_1x ∈ M_1 ∩ M_1^x. Further

ΨA_1x = (1/2πi) ∫_Γ (λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} (A_1−λ+λ) x dλ
      = (1/2πi) ∫_Γ λ(λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} x dλ − (1/2πi) ∫_Γ (λ−A_2^x)^{-1} B_2 C_1 x dλ
      = (1/2πi) ∫_Γ (λ−A_2^x+A_2^x)(λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} x dλ
      = (1/2πi) ∫_Γ B_2 C_1 (λ−A_1)^{-1} x dλ + (1/2πi) ∫_Γ A_2^x(λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} x dλ
      = B_2C_1P_1x + A_2^xΨx = A_2^xΨx = A_2Ψx.

Here we used that C_1x = 0 (so that the second integral in the first step vanishes and B_2C_1P_1x = B_2C_1x = 0) and that C_2Ψx = 0 (so that A_2^xΨx = A_2Ψx).

For j = 1,2, let K_j0, K_j1, K_j2, ... be the outgoing subspaces of θ_j. Thus

K_jm = M_j ∩ M_j^x ∩ Ker C_j ∩ ... ∩ Ker C_jA_j^{m-1}.

We proved already that Ψ maps K_10 one-to-one onto K_20 and K_11 into K_21. Assume that for some non-negative integer n the inclusion Ψ[K_1n] ⊂ K_2n is correct, and take x ∈ K_1,n+1. Then x ∈ K_11. Since A_1K_11 ⊂ K_10 = M_1 ∩ M_1^x, we also have A_1x ∈ K_1n. Thus Ψx ∈ K_21 and A_2Ψx = ΨA_1x ∈ Ψ[K_1n]. According to the induction hypothesis Ψ[K_1n] ⊂ K_2n, so A_2Ψx ∈ K_2n. But then Ψx ∈ K_2,n+1. We conclude that Ψ[K_1m] ⊂ K_2m for all m. Analogously Ψ′[K_2m] ⊂ K_1m for all m. Here Ψ′ is the operator given by (5.2). Since Ψ′ and Ψ are each other's inverse, it follows that Ψ[K_1m] = K_2m, and the proof is complete. □

6. Spectral characteristics of transfer functions:

incoming spaces


In this section we continue the investigation started in Section 5.

THEOREM 6.1. Let θ_1 = (A_1,B_1,C_1;X_1,Y) and θ_2 = (A_2,B_2,C_2;X_2,Y) be nodes without spectrum on the contour Γ. Suppose that the transfer functions of θ_1 and θ_2 coincide on Γ. Let the Riesz projections P_1, P_1^x and the bounded linear operator Φ: X_1 → X_2 be given by

P_1 = (1/2πi) ∫_Γ (λ−A_1)^{-1} dλ,    P_1^x = (1/2πi) ∫_Γ (λ−A_1^x)^{-1} dλ,

Φ = (1/2πi) ∫_Γ (λ−A_2^x)^{-1} B_2 C_1 (λ−A_1)^{-1} (I−P_1^x) P_1 dλ.


Then, for m = 0,1,2,...,

H_1m = Φ^{-1}[H_2m],    X_2 = Im Φ + H_2m.

Here M_1, M_1^x and M_2, M_2^x are the pairs of spectral subspaces associated with Γ and θ_1, θ_2, respectively.

Note that Φ = Ψ(I−P_1^x)P_1, where Ψ is as in Theorem 5.1. This is clear from formula (5.1). By Cauchy's theorem Φ can also be written as

(6.1)

PROOF. We begin by showing that Φ maps H_1m into H_2m. Here H_j0, H_j1, H_j2, ... are the incoming subspaces of θ_j, i.e.,

H_jm = M_j + M_j^x + Im B_j + ... + Im A_j^{m-1}B_j.

By P_2 and P_2^x we denote the Riesz projections P_2 = (1/2πi) ∫_Γ (λ−A_2)^{-1} dλ and P_2^x = (1/2πi) ∫_Γ (λ−A_2^x)^{-1} dλ. Recall that M_j = Im P_j and M_j^x = Ker P_j^x (j = 1,2).

Let m be a non-negative integer and take x ∈ H_1m. Write x in the form

x = u + B_1y_0 + A_1B_1y_1 + ... + A_1^{m-1}B_1y_{m-1},    u ∈ M_1 + M_1^x.

x Then PIX

x x x x m-l PIP I u + PI B I Y 0 + ••• + PI (A I ) B 1 Y m- l' For A € r,


put

Clearly f is analytic on a neighbourhood of n+ u rand r

m-l 1 f k x x -1 ) + r (---2' A P2(A-A 2) B2YkdA

k=O 111. r

4>x + m-1 x x k

r P2 (A 2) B 2Yk. k=O

So in order to prove that Φx ∈ H_2m, it suffices to show that

(6.2)

As we shall see, this requires a considerable effort.

First of all,

m-1 I: (_1_

k=O 2111

1 + 2111 f

r

k x x -1 ) f A PIO-AI) BIykdA + r

x x -1 -1 x P 1(A-A 1) B1C1(A-A 1) (I- P l)P 1P 1udA +

345


where z ∈ Im P_1 = M_1.


x P1z,

Secondly, we introduce another auxiliary function. For λ ∈ Γ, put

Here W(λ)

-1 -1 W(;I.)g(;I.) - 2~i f C1(;I.-A 1) (~-Al) Blg(~)d~ =

r 1 x -1 x -1

f(;I.) + 2~i f W(;I.)C 1(;I.-A 1) (~-Al) Blf(~)d~ + r

x -1 x + W(;I.)C1(;I.-A l ) (I-P1)z +

1 -1 -1 -1 - ~i {C l (;I.-A 1) (~-Al) BIW(~) f(~)d~ +

Page 356: Constructive Methods of Wiener-Hopf Factorization

wiener-Hopi equivalence 347

-1 x x f(h) - C1(h-A 1) (P 1z-P 1P l z-z+P l z) +

-1 x -1 - 2ni f C1(h-A l ) P l (u-A 1 ) B1f(u)du +

r

As a third step in the proof of (6.2), we show that

(6.3)

Page 357: Constructive Methods of Wiener-Hopf Factorization

348 Bart, Gohberg and Kaashoek

From the resolvent equation it is clear that

(cf. the proof of Theorem 5.1), and hence

g(λ)

It follows that

The latter integral is zero by Cauchy's theorem. So

∫_Γ (I−P_2)(λ−A_2^x)^{-1} B_2 f(λ) dλ +

which vanishes again by Cauchy's theorem.

λ, μ ∈ Γ

We are now ready to prove (6.2). Recall from the proof

of Theorem 5.1 that

λ, μ ∈ Γ

Page 358: Constructive Methods of Wiener-Hopf Factorization

Wiener-Hopi equivalence

(cf. the identity (6.4)). Using this and the expression for f(λ) in terms of g(λ) established above, we obtain

1 2 - (21Ti) I I

r r

I P;(A-A;)-l B2W (A)g(A)dA + r

x x -1 -1-1 P2(A-A 2) B2C1(A-A 1) (~-Al) Blg(~)d~dA

1 I x x-l 21Ti r P2 (A-A 2 ) B2W(A)g(A)dA +

1 2 x x -1 -1-1 -(~) II P2(A-A 2 ) B2C2(A-A 2 ) (~-A2) B2g(~)d~dA

1T1 rr

1 x -1 21Ti ~ P2 (A-A 2 ) B2g(A)d A +

1 2 x -1 x -1 -1 - (21Ti) ~ ~ P2 [(A-A 2) -(A-A 2 ) ](~-A2) B2g(~)dAd~

Hence, by formula (6.3),

The desired result (6.2) is now clear from the fact that Im P_2^x P_2 ⊂ Im P_2 + Ker P_2^x = M_2 + M_2^x.

We have proved that Φ maps H_1m into H_2m. In other words H_1m ⊂ Φ^{-1}[H_2m]. In order to establish the reverse inclusion, we introduce the operator


Note that Φ′ is obtained from Φ by interchanging the roles of θ_1 and θ_2. So, by symmetry, we have H_2m ⊂ (Φ′)^{-1}[H_1m]. Assume now that

(6.5)  x − Φ′Φx ∈ M_1 + M_1^x,    x ∈ X_1.

Then we can argue as follows. Take x ∈ Φ^{-1}[H_2m]. Then Φx ∈ H_2m ⊂ (Φ′)^{-1}[H_1m], and so Φ′Φx ∈ H_1m. We conclude that

x = x − Φ′Φx + Φ′Φx ∈ M_1 + M_1^x + H_1m = H_1m.

Next we prove (6.5). Take x ∈ X_1. From (6.1) it is clear that Φx ∈ Im P_2^x. Thus

Observe that

~x

and hence

As we shall see presently,

(6.6)  C_1(λ−A_1)^{-1}(I−P_1)(μ−A_1)^{-1}B_1 = C_2(λ−A_2)^{-1}(I−P_2)(μ−A_2)^{-1}B_2,    λ, μ ∈ Γ.

It follows that Φ′Φx must be equal to


So x − Φ′Φx ∈ Im P_1 + Im(I−P_1^x) = M_1 + M_1^x.

To see that (6.6) is true, recall that the transfer functions of θ_1 and θ_2 coincide on Γ. By using the resolvent equation a number of times, one gets

λ, μ, u ∈ Γ.

Formula (6.6) appears by integrating this expression over Γ (integration variable u).

The first statement in Theorem 6.1, namely that H_1m = Φ^{-1}[H_2m], has now been proved. So we turn to the second assertion holding that X_2 = Im Φ + H_2m. Since H_20, H_21, ... is an increasing sequence, it suffices to show that X_2 = Im Φ + M_2 + M_2^x. But this is true indeed, because

(6.7)  y − ΦΦ′y ∈ M_2 + M_2^x,    y ∈ X_2.

To see that (6.7) holds, consider (6.5) and interchange the roles of θ_1 and θ_2. This proves the theorem. □

We are now ready to prove the only if part of Theorem 1.2.

PROOF of THEOREM 1.2 (only if part). Assume W admits a (right) Wiener-Hopf factorization with respect to the given contour Γ. Then, by Theorem 3.1, there exists a node θ without spectrum on Γ such that the transfer function of θ coincides on Γ with W, while in addition

(6.8)  dim(M̃ ∩ M̃^x) < ∞,    codim(M̃ + M̃^x) < ∞.

Here M̃, M̃^x is the pair of spectral subspaces associated with θ and Γ. From Theorem 5.1 it is clear that


dim(M ∩ M^x) = dim(M̃ ∩ M̃^x) and codim(M + M^x) = codim(M̃ + M̃^x). Thus (6.8) holds with M̃ and M̃^x replaced by M and M^x, respectively. □

For the case dealt with in this paper, where Γ is a positively oriented contour in the finite complex plane, Theorem 1.2 is the same as Theorem 2.2 in Section 1.2 of [BGK4]. As was mentioned in Section 1, the situation considered in [BGK4] is slightly more general. However, it can easily be reduced to the case treated here. Indeed, let W, θ, Γ, M and M^x be as in Theorem 2.2 in Section 1.2 of [BGK4]. A small change in the contour Γ does not affect the fact that θ has no spectrum on Γ. Neither does it change the incoming and outgoing subspaces of θ. So we may assume that ∞ ∉ Γ, i.e., that Γ is a contour in the finite complex plane ℂ. But then either the inner domain Ω_Γ^+ or the outer domain Ω_Γ^- of Γ is bounded. The case when Ω_Γ^+ is bounded is covered by Theorem 1.2. For the case when Ω_Γ^- is bounded, the desired result follows by changing the orientation of the contour Γ and applying Theorem 1.2 to the node θ^x = (A^x, B, −C; X, Y).

7. Spectral characteristics and Wiener-Hopf equivalence

The aim of this section is to prove Theorem 1.1, the

main result of this paper.

PROOF OF THEOREM 1.1. There is no loss of generality in assuming that the functions W_- and W_+ appearing in the Wiener-Hopf equivalence (1.1) can be written as the transfer function of a minus node θ_- and a plus node θ_+, respectively. This can be seen with an argument analogous to that used in the last part of the proof of Theorem 3.1.

Write θ_- = (A_-,B_-,C_-;X_-,Y), θ_+ = (A_+,B_+,C_+;X_+,Y) and define θ = (A,B,C;X,Y) as the product of θ_-, θ_2 and θ_+, i.e., θ = θ_-θ_2θ_+. Then A, B, C and A^x are given by (3.2) - (3.5) with A_0, B_0, C_0 and X_0 replaced by A_2, B_2, C_2 and X_2, respectively. Clearly, θ is a node without spectrum on Γ and, for λ ∈ Γ, the transfer function of θ has the value W_-(λ)W_2(λ)W_+(λ) = W_1(λ). So on Γ the transfer functions of θ_1 and θ coincide. This enables


us to apply Theorems 5.1 and 6.1.

Let Ψ_0: X_1 → X and Φ_0: X_1 → X be given by

Ψ_0 = (1/2πi) ∫_Γ (λ−A^x)^{-1} B C_1 (λ−A_1)^{-1} dλ,

Φ_0 = (1/2πi) ∫_Γ (λ−A^x)^{-1} B C_1 (λ−A_1)^{-1} (I−P_1^x) P_1 dλ.

Then Ψ_0 and Φ_0 are bounded linear operators and, for m = 0,1,2,...,

Ψ_0[K_1m] = K_m,    Ker Ψ_0 ∩ K_1m = (0),    H_1m = Φ_0^{-1}[H_m],    X = Im Φ_0 + H_m.

Here the K_1m (K_m) are the outgoing spaces of θ_1 (θ) and the H_1m (H_m) are the incoming spaces of θ_1 (θ). From Theorem 3.2 we know that

K_m = (0) ⊕ K_2m ⊕ (0),    H_m = X_- ⊕ H_2m ⊕ X_+,

where the K_2m and H_2m are the outgoing and incoming spaces of θ_2, respectively. So, if J is the canonical projection of X onto

02' respectively. So, if J is the canonical projection of X onto

X 2 , i.e.,

J = [0 I

(J • ~O)[K1m] = K2m ,

HIm = (J • t o)-1[H 2m ],

Ke r ( J • ~ a) n KIm = (0),

An easy computation, based on the special form of the operators

AX and B, shows that J • ~O = ~ and J • ta = t, with ~ and t

as in the theorem, and the proof is complete.D


The operators Ψ and Φ appearing in Theorem 1.1 can also be written as

where W_+ is as in (1.1). The proof is based on (7.1) and the identities (λ−A_2^x)^{-1}B_2W_2(λ) = (λ−A_2)^{-1}B_2 and W_1(λ)C_1(λ−A_1^x)^{-1} = C_1(λ−A_1)^{-1} (cf. the proof of Theorem 5.1).

REFERENCES

[BGK1] Bart, H., Gohberg, I., Kaashoek, M.A.: Minimal factorization of matrix and operator functions. Operator Theory: Advances and Applications, Vol. 1. Basel-Boston-Stuttgart, Birkhauser Verlag, 1979.

[BGK2] Bart, H., Gohberg, I., Kaashoek, M.A.: Wiener-Hopf factorization of analytic operator functions and realization. Rapport 231, Wiskundig Seminarium, Vrije Universiteit, Amsterdam, 1983.

[BGK3] Bart, H., Gohberg, I., Kaashoek, M.A.: Wiener-Hopf factorization and realization. In: Mathematical Theory of Networks and Systems, Proceedings of the MTNS-83 International Symposium, Beer Sheva, Israel, Lecture Notes in Control and Information Sciences, no. 58 (Ed. P. Fuhrmann), Springer Verlag, Berlin, 1984, pp. 42-62.

[BGK4] Bart, H., Gohberg, I., Kaashoek, M.A.: Explicit Wiener-Hopf factorization and realization. This volume.

[CG] Clancey, K., Gohberg, I.: Factorization of matrix functions and singular integral operators. Operator Theory: Advances and Applications, Vol. 3. Basel-Boston-Stuttgart, Birkhauser Verlag, 1981.

[GKS] Gohberg, I., Kaashoek, M.A., Van Schagen, F.: Similarity of operator blocks and canonical forms. II. Infinite dimensional case and Wiener-Hopf factorization. In: Topics in Modern Operator Theory. Operator Theory: Advances and Applications, Vol. 2. Basel-Boston-Stuttgart, Birkhauser Verlag, 1981, pp. 121-170.

[H] Helton, J.W.: Systems with infinite-dimensional state space: the Hilbert space approach. Proc. IEEE 64 (1), 145-160 (1976).

H. Bart
Econometrisch Instituut
Erasmus Universiteit
Postbus 1738
3000 DR Rotterdam
The Netherlands

M.A. Kaashoek
Subfaculteit Wiskunde en Informatica
Vrije Universiteit
Postbus 7161
1007 MC Amsterdam
The Netherlands

I. Gohberg
Dept. of Mathematical Sciences
The Raymond and Beverly Sackler Faculty of Exact Sciences
Tel-Aviv University
Ramat-Aviv, Israel


Operator Theory: Advances and Applications, Vol. 21 ©1986 Birkhauser Verlag Basel

MULTIPLICATION BY DIAGONALS AND REDUCTION TO CANONICAL FACTORIZATION

H. Bart, I. Gohberg, M.A. Kaashoek

The calculus of matrix functions in realized form is developed further. Special attention is paid to the operation of multiplication by diagonal matrix functions. The analysis yields a reduction of matrix functions to functions that admit canonical factorization.

1. INTRODUCTION.

Let W be an m×m rational matrix function without poles or zeros on a given contour Γ and having the value I_m at ∞. In this paper we analyse the effect on realizations of W when W is multiplied on the left by a (Wiener-Hopf) diagonal, i.e., a matrix function of the form

(1.1)  D(λ) = diag( ((λ−ε_1)/(λ−ε_2))^{u_1}, ..., ((λ−ε_1)/(λ−ε_2))^{u_m} ),

where ε_1 is in the inner domain Ω_Γ^+ of Γ and ε_2 is in the outer domain Ω_Γ^- of Γ.
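For concreteness, such a diagonal can be set up numerically; a hedged sketch (the exponents and sample points are ours):

```python
import numpy as np

# Hedged sketch: a Wiener-Hopf diagonal D(lam) with exponents u = (1, 0, -2),
# e1 inside and e2 outside the unit circle (playing the role of Gamma).
e1, e2 = 0.2, 3.0
u = np.array([1, 0, -2])

def D(lam):
    return np.diag(((lam - e1) / (lam - e2)) ** u)

lam = np.exp(0.7j)          # a point on the unit circle
val = D(lam)
# D has neither zeros (at e1) nor poles (at e2) on the circle, and
# det D(lam) = ((lam - e1)/(lam - e2))^(u_1 + ... + u_m):
ok = np.isclose(np.linalg.det(val), ((lam - e1) / (lam - e2)) ** u.sum())
print(ok)  # True
```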

The main result is Theorem 3.1. It provides a procedure for choosing the exponents u_1, ..., u_m in (1.1) in such a way that the function D(λ)W(λ) admits a canonical Wiener-Hopf factorization. As a corollary, a theorem originally proved by I.S. Chibotaru (see [Ch]; cf. also [CG], Section VI.2) is obtained. The main result in Section 2 is a general theorem


describing the pair of spectral subspaces associated with a product of two nodes.

Throughout the paper we shall assume that Γ is a positively oriented contour in the (finite) complex plane ℂ (and hence Ω_Γ^+ is bounded). From the last paragraph of Section 6 in [BGK4], one sees that this assumption does not affect the generality of the results.

We shall use the notation and terminology of [BGK3] and [BGK4]. For the convenience of the reader we recall a few of the main definitions. A node (or a system) is a quintet θ = (A,B,C;X,Y), where A: X → X, B: Y → X and C: X → Y are (bounded linear) operators acting between the complex Banach spaces X and Y. In other words, A ∈ L(X), B ∈ L(Y,X) and C ∈ L(X,Y). The node θ is called finite dimensional if both X and Y are finite dimensional. The space X is called the state space of θ. The transfer function of θ is the function

W_θ(λ) = I + C(λ−A)^{-1}B.

If θ is finite dimensional, W_θ is (or may be identified with) a rational matrix function having the value I at ∞. Conversely, every rational (square) matrix function having the value I at ∞ can be written as the transfer function of a finite dimensional node. We say that θ has no spectrum on the

contour Γ if A has no spectrum on Γ and the same holds true for the operator A−BC, which henceforth will be denoted by A^x. In that case one can introduce the spectral subspaces

M = Im( (1/2πi) ∫_Γ (λ−A)^{-1} dλ ),    M^x = Ker( (1/2πi) ∫_Γ (λ−A^x)^{-1} dλ ).

The pair M, M^x is called the pair of spectral subspaces associated with θ and Γ.

An earlier version of this paper appeared in [BGK2].
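The spectral subspaces are images and kernels of Riesz projections, i.e. contour integrals of resolvents. A hedged numerical sketch (example matrix ours) approximating such an integral over the unit circle:

```python
import numpy as np

# Hedged sketch: approximate P = (1/2*pi*i) \int_Gamma (lam - A)^{-1} d(lam)
# over the unit circle by the trapezoid rule (exponentially accurate for
# analytic integrands on closed contours). Im P is the spectral subspace M
# belonging to the eigenvalues inside Gamma.
A = np.diag([0.3, -0.4, 2.5])   # 0.3 and -0.4 inside Gamma, 2.5 outside
n, N = 3, 400
P = np.zeros((n, n), dtype=complex)
for k in range(N):
    lam = np.exp(2j * np.pi * k / N)   # quadrature node on the unit circle
    dlam = 2j * np.pi * lam / N        # d(lambda) for the trapezoid rule
    P += np.linalg.solve(lam * np.eye(n) - A, np.eye(n)) * dlam
P /= 2j * np.pi

# For this diagonal A, P projects onto the first two coordinates:
print(np.allclose(P, np.diag([1.0, 1.0, 0.0]), atol=1e-10))  # True
```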


2. Spectral pairs associated with products of nodes

The following theorem concerns the pair of spectral subspaces associated with a product of nodes (cf. [BGK1]). Similar results hold for the incoming and outgoing subspaces as introduced in [BGK3]. A special case of the theorem (multiplication by minus or plus nodes) was already obtained in Section 3 of [BGK4].

THEOREM 2.1. Suppose θ_1 = (A_1,B_1,C_1;X_1,Y) and θ_2 = (A_2,B_2,C_2;X_2,Y) are nodes without spectrum on the contour Γ. For j = 1,2, let M_j, M_j^x be the pair of spectral subspaces associated with Γ and θ_j. Also, let M, M^x be the pair of spectral subspaces associated with Γ and the product θ = θ_1θ_2 of θ_1 and θ_2. Then

(2.1)  M = [ I  R ] [ M_1 ],    M^x = [ I    0 ] [ M_1^x ]
           [ 0  I ] [ M_2 ]           [ R^x  I ] [ M_2^x ]

where R: X_2 → X_1 and R^x: X_1 → X_2 are the bounded linear operators given by

(2.2)  R = (1/2πi) ∫_Γ (λ−A_1)^{-1} B_1 C_2 (λ−A_2)^{-1} dλ,

(2.3)  R^x = −(1/2πi) ∫_Γ (λ−A_2^x)^{-1} B_2 C_1 (λ−A_1^x)^{-1} dλ.

The notation used in (2.1) was introduced in Section 3 of [BGK4].

PROOF. Recall from [BGK1], Section 1.1 that θ = (A,B,C;X,Y), where X = X_1 ⊕ X_2 and

A = [ A_1  B_1C_2 ],    B = [ B_1 ],    C = [ C_1  C_2 ].
    [ 0    A_2    ]         [ B_2 ]

Also A^x = A − BC has the representation

A^x = [ A_1^x    0     ].
      [ −B_2C_1  A_2^x ]


From this it is clear that θ has no spectrum on Γ, and so M and M^x are well-defined.
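As a numerical aside, the product realization recalled in this proof can be verified directly; a hedged sketch (random matrices, names ours) checking that the transfer function of θ = θ_1θ_2 is the pointwise product W_1W_2:

```python
import numpy as np

# Hedged sketch of the product node theta = theta_1 theta_2:
#   A = [[A1, B1*C2], [0, A2]],  B = [[B1], [B2]],  C = [C1, C2],
# whose transfer function is W1(lam) * W2(lam).
rng = np.random.default_rng(0)
m, n1, n2 = 2, 3, 4
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
B1, B2 = rng.standard_normal((n1, m)), rng.standard_normal((n2, m))
C1, C2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))

A = np.block([[A1, B1 @ C2], [np.zeros((n2, n1)), A2]])
B = np.vstack([B1, B2])
C = np.hstack([C1, C2])

def W(lam, A, B, C):
    n = A.shape[0]
    return np.eye(C.shape[0]) + C @ np.linalg.solve(lam * np.eye(n) - A, B)

lam = 1.7 + 2.0j
ok = np.allclose(W(lam, A, B, C), W(lam, A1, B1, C1) @ W(lam, A2, B2, C2))
print(ok)  # True
```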

Write M = Im P, M^x = Ker P^x, M_1 = Im P_1, M_1^x = Ker P_1^x, M_2 = Im P_2 and M_2^x = Ker P_2^x, where P, P^x, P_1, P_1^x, P_2 and P_2^x are the appropriate Riesz projections. Using the integral representation of these projections, one easily sees that

P = [ P_1  R  ],    P^x = [ P_1^x  0     ],
    [ 0    P_2 ]          [ R^x    P_2^x ]

where R: X_2 → X_1 and R^x: X_1 → X_2 are given by (2.2) and (2.3), respectively. Since P and P^x are projections, we have R = P_1R + RP_2 and R^x = R^xP_1^x + P_2^xR^x. The identities (2.1) are now clear from M = Im P, M^x = Im(I−P^x) and M_j = Im P_j, M_j^x = Im(I−P_j^x), j = 1,2. □

REMARK 2.2. Let θ_1, θ_2, etc. be as in Theorem 2.1 and suppose in addition that σ(A_1) ⊂ Ω_Γ^- and σ(A_1^x) ⊂ Ω_Γ^+. Then P_1 = 0 and P_1^x = I. So in this case the identities (2.1) simplify to

M = [ I  R ] [ (0) ],    M^x = [ (0)   ].
    [ 0  I ] [ M_2 ]           [ M_2^x ]


This we shall use in the next section where for θ_1 we take a minimal node whose transfer function has the diagonal form (1.1) with all exponents u_1, ..., u_m non-negative.

3. Multiplication by diagonals

In this section we restrict ourselves to the finite dimensional case and we study the effect of multiplication (on the left) by an m×m diagonal matrix D of the form (1.1), i.e.,

(3.1)  D(λ) = diag( ((λ−ε_1)/(λ−ε_2))^{u_1}, ..., ((λ−ε_1)/(λ−ε_2))^{u_m} ).

Here ε_1 lies in the (bounded) inner domain Ω_Γ^+ of the given contour Γ and ε_2 lies in the outer domain Ω_Γ^- of Γ.

For k = 1, ..., m, let π_k: ℂ^m → ℂ be the canonical projection of ℂ^m onto the k-th coordinate space. In other words π_k assigns to each vector in ℂ^m its k-th coordinate. The next theorem is the main result of this chapter. For the definition of

outgoing bases, see [BGK3].

THEOREM 3.1. Let θ = (A,B,C;X,ℂ^m) be a finite dimensional node without spectrum on the contour Γ, let M, M^x be the pair of spectral subspaces associated with Γ and θ, and let {e_jk} (k = 1,...,α_j; j = 1,...,t) be an outgoing basis for θ with respect to the operator A−ε_2. Then the restriction of A−ε_2 to M is an invertible operator on M and the complex t×m matrix

(3.2)  ( π_k C(A_M−ε_2)^{-1} e_j1 ),    j = 1,...,t,  k = 1,...,m,

has rank t. Suppose s_1, ..., s_t are integers among 1, ..., m such that


(3.3)  det( π_{s_k} C(A_M−ε_2)^{-1} e_j1 )_{j,k=1}^{p} ≠ 0,    p = 1,...,t.

Further, assume that the exponents u_1, ..., u_m in the diagonal entries of D, given by (3.1), are non-negative and satisfy

(3.4)  u_{s_k} ≥ α_k,    k = 1,...,t.

Then, if M̃, M̃^x is the pair of spectral subspaces associated with Γ and the product node θ̃ = θ_Dθ, where θ_D is an arbitrary finite dimensional realization of D without spectrum on Γ, the following identities hold:

(3.5)  M̃ ∩ M̃^x = (0),

(3.6)  dim(X̃/(M̃+M̃^x)) = dim(X/(M+M^x)) + Σ_{k=1}^{t} (u_{s_k} − α_k) + Σ_{j=1,...,m; j≠s_1,...,s_t} u_j.

In particular dim(X̃/(M̃+M̃^x)) ≥ dim(X/(M+M^x)), equality occurring if and only if

(3.7)  u_{s_k} = α_k (k = 1,...,t),    u_j = 0 (j = 1,...,m; j ≠ s_1,...,s_t).

Here X̃ is the state space of the node θ̃. The fact that the matrix (3.2) has rank t implies that there do exist integers s_1, ..., s_t among 1, ..., m such that (3.3) is satisfied.

PROOF. The spectrum of the restriction A_M of A to M lies in the inner domain Ω_Γ^+ of Γ. Since ε_2 ∈ Ω_Γ^-, it follows that A_M−ε_2 is invertible. Suppose a_1, ..., a_t are complex numbers such that

Σ_{j=1}^{t} a_j C(A_M−ε_2)^{-1} e_j1 = 0.

Put u = Σ_{j=1}^{t} a_j e_j1. Then Cv = 0, where


v = (A_M−ε_2)^{-1}u ∈ M. Observe that (A^x−ε_2)v = (A−BC−ε_2)v = (A−ε_2)v = u. So

v = (A^x−ε_2)^{-1}u.

Let P^x be the Riesz projection corresponding to A^x and Γ. Then

P^x v = (1/2πi) ∫_Γ (λ−A^x)^{-1} v dλ = (1/2πi) ∫_Γ v/(λ−ε_2) dλ + (1/2πi) ∫_Γ (λ−A^x)^{-1}u/(λ−ε_2) dλ.

The first integral in this sum vanishes because ε_2 ∈ Ω_Γ^-; the second because its integrand is analytic on a neighbourhood of Ω_Γ^+ ∪ Γ too. Thus v ∈ M ∩ M^x ∩ Ker C. Now M ∩ M^x ∩ Ker C is spanned by the vectors e_jk with k = 1,...,α_j−1, j = 1,...,t (α_j ≥ 2). It follows that u = (A−ε_2)v is a linear combination of the vectors (A−ε_2)e_jk = e_j,k+1, where k and j are subject to the same restrictions. On the other hand u = a_1e_11 + ... + a_te_t1, and we may conclude that a_1 = ... = a_t = 0. This proves that the vectors C(A_M−ε_2)^{-1}e_11, ..., C(A_M−ε_2)^{-1}e_t1 are linearly independent in ℂ^m. In other words the matrix (3.2) has rank t.

Next we deal with the second part of the theorem. In view of the results obtained in Sections 5 and 6 of [BGK4], it suffices to consider one single realization θ_D of D without spectrum on the contour Γ. For θ_D we now take a special (minimal) realization of D which is constructed as follows (cf. Section 2 in [BGK4]). For j = 1, ..., m, define

A. J

B. J

j

( €

1

0

0

r ~

0

2

1, ••• ,ro, define

o

o

u. ([ + ([ J

o o
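A minimal realization of each scalar diagonal entry can also be sketched numerically. The construction below is ours (a u-fold product of 1×1 nodes), not necessarily the matrices A_j, B_j, C_j of the text, but it realizes the same scalar function:

```python
import numpy as np

# Hedged sketch: a minimal realization of ((lam - e1)/(lam - e2))^u, built
# as the u-fold product of 1x1 nodes realizing
#   (lam - e1)/(lam - e2) = 1 + (e2 - e1)/(lam - e2).
e1, e2, u = 0.2, 3.0, 3

def product(node1, node2):
    # product node, cf. Section 2: A = [[A1, B1*C2], [0, A2]]
    A1, B1, C1 = node1
    A2, B2, C2 = node2
    A = np.block([[A1, B1 @ C2], [np.zeros((A2.shape[0], A1.shape[0])), A2]])
    return A, np.vstack([B1, B2]), np.hstack([C1, C2])

one = (np.array([[e2]]), np.array([[e2 - e1]]), np.array([[1.0]]))
node = one
for _ in range(u - 1):
    node = product(node, one)
A, B, C = node                  # state space dimension u (minimal)

lam = 1.5 + 0.5j
W = 1 + (C @ np.linalg.solve(lam * np.eye(u) - A, B))[0, 0]
print(np.isclose(W, ((lam - e1) / (lam - e2)) ** u))  # True
```

The realization is minimal because the realized function has a pole of order u at e2, so no state space of dimension less than u can produce it.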


Then θ_j = (A_j,B_j,C_j;ℂ^{u_j},ℂ) is a (minimal) realization of the scalar function ((λ−ε_1)/(λ−ε_2))^{u_j}. The (minimal) realization θ_D = (A_D,B_D,C_D;X_D,ℂ^m) of D is now obtained by putting

(3.8)  X_D = ℂ^{u_1} ⊕ ... ⊕ ℂ^{u_m},

A_D = diag(A_1, ..., A_m): X_D → X_D,

B_D = diag(B_1, ..., B_m): ℂ^m → X_D,

C_D = diag(C_1, ..., C_m): X_D → ℂ^m.

In other words θ_D is the "direct sum" of the nodes θ_1, ..., θ_m. Note that θ_D has no spectrum on Γ. In fact σ(A_D) ⊂ {ε_2} ⊂ Ω_Γ^- and σ(A_D^x) ⊂ {ε_1} ⊂ Ω_Γ^+.

Write θ̃ = θ_Dθ = (Ã,B̃,C̃;X̃,ℂ^m). Then θ̃ is a node without spectrum on Γ. According to Remark 2.2 the pair M̃, M̃^x of spectral subspaces associated with θ̃ and Γ can be expressed as


(3.9)  M̃ = [ I  R ] [ (0) ],    M̃^x = [ (0)  ],
            [ 0  I ] [ M   ]            [ M^x ]

where R: X → X_D is defined by

R = (1/2πi) ∫_Γ (λ−A_D)^{-1} B_D C (λ−A)^{-1} dλ.

With respect to the decomposition (3.8), the operator R has the form

(3.10)  R = [ R_1 ]
            [ ⋮   ]
            [ R_m ]

Taking into account the definition of Ak and Bk , we get

Observe now that

(1/2πi) ∫_Γ (λ−A)^{-1} (λ−ε_2)^{-ν} dλ = (A_M−ε_2)^{-ν} P,    ν = 1, 2, ...,

where P is the Riesz projection corresponding to A and the contour Γ, considered here as an operator from X into M = Im P. It follows that


(3.11)    k = 1, ..., m.

Here C is viewed as an operator from M into ℂ^m. Combining (3.10) and (3.11) one gets a complete description of R.

Until now we only used that the integers u_1, ..., u_m are non-negative. Next we are going to employ the assumption (3.4), where s_1, ..., s_t are such that (3.3) is satisfied. Since M̃ and M̃^x are given by (3.9), the statement M̃ ∩ M̃^x = (0) is equivalent to the assertion M ∩ M^x ∩ Ker R = (0).

Take x ∈ M ∩ M^x ∩ Ker R. Then in particular Rx = 0. In view of (3.10) and (3.11) this means that

(3.12)  π_k C(A_M−ε_2)^{-ν} x = 0,    ν = 1,...,u_k;  k = 1,...,m.

Also x ∈ M ∩ M^x, and so x can be written as a linear combination of the vectors e_jk:

(3.13) x = E Aokeok. k= 1 , ••• , ex ° J J j=l, ••• ,t J

From (3.4) we see that

(3.12) implies

u ""'U are strictly positive. Thus sl St

(3.14) 0, k l, ... ,t.

Observe now that

Together with (3.14) this gives


Σ_{j=1}^{t} λ_{j1} π_{s_k} C(A_M − ε_2)^{−1} e_{j1} = 0,   k = 1,…,t.

But then it follows from (3.3) that λ_{11} = ⋯ = λ_{t1} = 0. If α_1 = 1 (and hence α_1 = ⋯ = α_t = 1), we may conclude that x = 0. If α_1 ≥ 2, we continue as follows.


Suppose α_1 ≥ ⋯ ≥ α_q ≥ 2 > α_{q+1} ≥ ⋯ ≥ α_t. Then (3.13) can be written as

x = Σ_{j=1,…,q; k=2,…,α_j} λ_{jk} e_{jk}.

From (3.4) we see that u_{s_1},…,u_{s_q} are all at least 2. Hence, by (3.12),

⋯ = 0,   k = 1,…,q.

Using that

⋯

we now obtain

⋯ = 0,   k = 1,…,q.

In view of (3.3) this implies λ_{12} = ⋯ = λ_{q2} = 0. Proceeding in this way (or, if one prefers, by a formal finite induction argument) we see that all complex numbers λ_{jk} are zero. Hence x = 0 and we have proved (3.5).

In order to establish (3.6) we choose subspaces N and N^× of X such that M = (M ∩ M^×) ⊕ N and M^× = (M ∩ M^×) ⊕ N^×. Then M + M^× = N ⊕ (M ∩ M^×) ⊕ N^× and so


dim(X/(M+M^×)) = dim X − dim N^× − dim(M ∩ M^×) − dim N.

Next we observe that

(3.15)   M̃ + M̃^× = [I R; 0 I]({0} ⊕ N) ⊕ [I R; 0 I]({0} ⊕ (M ∩ M^×)) ⊕ ⋯

Indeed, from (3.9) it is clear that M̃ + M̃^× is the sum of the four spaces appearing in the right hand side of (3.15). To see that the sum is direct, assume w ∈ N, x, y ∈ M ∩ M^×, z ∈ N^× and

⋯

Then w = z = 0, x = −y and Rx = 0. So x ∈ M ∩ M^× ∩ Ker R. But M ∩ M^× ∩ Ker R = (0). Hence x = y = 0 too.

The identity (3.15) implies

⋯

It follows that

dim(X/(M+M^×)) + Σ_{k=1}^{m} u_k − Σ_{j=1}^{t} α_j = dim(X/(M+M^×)) + Σ_{k=1}^{t} (u_{s_k} − α_k) + Σ_{j=1,…,m; j≠s_1,…,s_t} u_j,

and the proof is complete. □

Theorem 3.1 concerns a situation where the (non-negative) integers u_1,…,u_m appearing in (3.1) are chosen in a special way (cf. (3.4) and (3.7)). It is also possible to analyse other cases, for instance the case when all the u_k are equal to one single positive integer u. This comes down to multiplication


(from the left) by ((λ−ε_1)/(λ−ε_2))^u I_m. The analysis is similar to that given in the proof of Theorem 3.1. Further, one can replace left multiplication by multiplication from the right.

We refrain from developing the details here. Neither shall we discuss the case where all exponents u_1,…,u_m are non-positive. Instead we present a simple application of Theorem 3.1.

Let W be a rational m × m matrix function having no poles or zeros on the contour Γ and with the value I_m at ∞. Then W admits a (right) Wiener-Hopf factorization relative to the contour Γ (and the points ε_1, ε_2). In the finite dimensional case considered here the Wiener-Hopf factorization can be written in the form

(3.16)   W(λ) = W_−(λ) diag( ((λ−ε_1)/(λ−ε_2))^{κ_1}, …, ((λ−ε_1)/(λ−ε_2))^{κ_m} ) W_+(λ),   λ ∈ Γ,

where W_− is a minus function, W_+ is a plus function and κ_1 ≤ κ_2 ≤ ⋯ ≤ κ_m. The integers κ_1,…,κ_m, some of which may be zero now, are unique. We refer to them again as the factorization indices of W. For the definition of minus and plus functions, see Section 3 in [BGK4]. Of course W_− and W_+ are (meant to be) m × m matrix functions.
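For a scalar function, the factorization index relative to Γ is the winding number of the function along Γ. A small numerical sketch, with the unit circle standing in for Γ and illustrative points ε_1 inside and ε_2 outside:

```python
import numpy as np

eps1, eps2 = 0.3, 2.5          # eps1 inside the unit circle, eps2 outside
kappa = -2                     # exponent of (lambda - eps1)/(lambda - eps2)

# Winding number of w(lambda) = ((lambda-eps1)/(lambda-eps2))**kappa along
# the unit circle, computed by accumulating the unwrapped phase.
N = 4000
theta = 2 * np.pi * np.arange(N + 1) / N
lam = np.exp(1j * theta)
w = ((lam - eps1) / (lam - eps2)) ** kappa
winding = np.round(np.sum(np.diff(np.unwrap(np.angle(w)))) / (2 * np.pi))

# (lam - eps1) winds once (zero inside), (lam - eps2) winds zero times,
# so the total winding number is kappa.
assert winding == kappa
```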

In the situation of Theorem 3.1, the numbers −α_1,…,−α_t are precisely the negative factorization indices of W_Θ(λ) = I_m + C(λ − A)^{−1}B. This fact enables us to prove the following result due to I.S. Chibotaru [Ch].

COROLLARY 3.2. Let κ_1,…,κ_m be the factorization indices of W relative to the contour Γ. Then, for a suitable permutation κ̃_1,…,κ̃_m of κ_1,…,κ_m, there exists a factorization of the form


(3.17)   W(λ) = W_−(λ) diag( ((λ−ε_1)/(λ−ε_2))^{κ̃_1}, …, ((λ−ε_1)/(λ−ε_2))^{κ̃_m} ) W_+(λ),   λ ∈ Γ,

where W_− is a minus function relative to Γ and W_+ is a plus function with respect to Γ.

Again W_− and W_+ are m × m matrix functions. The type of factorization appearing in (3.17) was considered for the first time in a paper by G.D. Birkhoff [B]. In contrast with the situation for Wiener-Hopf factorizations (3.16), the indices κ̃_1,…,κ̃_m in (3.17) are not unique. Our proof will reflect this fact.

PROOF. We may assume that all factorization indices κ_1,…,κ_m are strictly negative. If necessary, this can be achieved by multiplying W by a suitable negative power of (λ−ε_1)/(λ−ε_2).

Write W as the transfer function of a finite dimensional node Θ = (A,B,C;X,C^m) without spectrum on the contour Γ. Let M, M^× be the pair of spectral subspaces associated with Θ and let {e_{jk}}_{k=1,…,α_j; j=1,…,t} be an outgoing basis for Θ with respect to the operator A − ε_2. The assumption that all factorization indices are strictly negative implies that M + M^× = X, t = m and κ_j = −α_j, j = 1,…,m. This is clear from the results obtained in [BGK3] and [BGK4]. Choose s_1,…,s_m and u_1,…,u_m as in Theorem 3.1. To be more specific, s_1,…,s_m is a permutation of 1,…,m and (3.3) and (3.7) are satisfied (with t = m). But then

u_{s_j} = α_j,   j = 1,…,m,

and so −u_1,…,−u_m is a permutation of κ_1,…,κ_m.

Let D be given by (3.1) with u_1,…,u_m as indicated above, and choose a finite dimensional realization Θ_D of D


having no spectrum on Γ. For instance one can take Θ_D as in the proof of Theorem 3.1. If M̃, M̃^× is the pair of spectral subspaces associated with the product node Θ̃ = Θ_DΘ, then (3.5) and (3.6) are satisfied. Now M + M^× = X and it follows that X̃ = M̃ ⊕ M̃^×. But then Theorem 2.1 in Ch. I of [BGK3] guarantees that the transfer function W̃ of Θ̃ admits a canonical (right) Wiener-Hopf factorization with respect to the contour Γ, i.e., a factorization for which all factorization indices are zero. Since W̃ is the product of D and W, we may conclude that W admits a factorization of the form described in the corollary. Indeed, for κ̃_1,…,κ̃_m one can just take the integers −u_1,…,−u_m (in this order). □

The diagonal factor in (3.17) is on the left. By the same type of argument one can obtain a factorization where it is on the right. Also there are factorizations in which the roles of W_− and W_+ are interchanged. We omit the details.

REFERENCES

[BGK1] Bart, H., Gohberg, I., Kaashoek, M.A.: Minimal factorization of matrix and operator functions. Operator Theory: Advances and Applications, Vol. 1. Basel-Boston-Stuttgart, Birkhäuser Verlag, 1979.

[BGK2] Bart, H., Gohberg, I., Kaashoek, M.A.: Wiener-Hopf factorization of analytic operator functions and realization. Rapport 231, Wiskundig Seminarium, Vrije Universiteit, Amsterdam, 1983.

[BGK3] Bart, H., Gohberg, I., Kaashoek, M.A.: Explicit Wiener-Hopf factorization and realization. This volume.

[BGK4] Bart, H., Gohberg, I., Kaashoek, M.A.: Invariants for Wiener-Hopf equivalence of analytic operator functions. This volume.

[B] Birkhoff, G.D.: A theorem on matrices of analytic functions. Math. Ann. 74, 122-133 (1913).

[Ch] Chibotaru, I.S.: The reduction of systems of Wiener-Hopf equations to systems with vanishing indices. Bull. Akad. Stiince RSS Moldoven 8, 54-66 (1967) [Russian].

[CG] Clancey, K., Gohberg, I.: Factorization of matrix functions and singular integral operators. Operator Theory: Advances and Applications, Vol. 3. Basel-Boston-Stuttgart, Birkhäuser Verlag, 1981.

H. Bart
Econometrisch Instituut
Erasmus Universiteit
Postbus 1738
3000 DR Rotterdam
The Netherlands

M.A. Kaashoek
Subfaculteit Wiskunde en Informatica
Vrije Universiteit
Postbus 7161
1007 MC Amsterdam
The Netherlands

I. Gohberg
Dept. of Mathematical Sciences
The Raymond and Beverly Sackler Faculty of Exact Sciences
Tel-Aviv University
Ramat-Aviv, Israel


Operator Theory: Advances and Applications, Vol. 21 © 1986 Birkhauser Verlag Basel

SYMMETRIC WIENER-HOPF FACTORIZATION OF SELFADJOINT RATIONAL MATRIX FUNCTIONS AND REALIZATION

M.A. Kaashoek and A.C.M. Ran 1)


Explicit formulas for a symmetric Wiener-Hopf factorization of a selfadjoint rational matrix function are constructed. The formulas are given in terms of realizations that are selfadjoint with respect to a certain indefinite inner product. The construction of the formulas is based on the method of Wiener-Hopf factorization developed in [2].

0. INTRODUCTION AND SUMMARY

0.1 Introduction. According to a relatively recent result in the theory of Wiener-Hopf factorization (see [7]), any m × m selfadjoint matrix function W(λ) which is continuous and has a nonzero determinant on the extended real line admits a factorization of the form

(0.1)   W(λ) = W_+(λ̄)^* Σ(λ) W_+(λ),   −∞ ≤ λ ≤ ∞,

where W_+(λ) is analytic in the open upper half plane and continuous up to the real line, det W_+(λ) ≠ 0 for Im λ ≥ 0 (including λ = ∞), and Σ(λ) is a selfadjoint block matrix of the following type:

(0.2)   Σ(λ) = ⋯

1) Research of second author supported by the Niels Stensen Stichting at Amsterdam.


374 Kaashoek and Ran

Here 0 < κ_1 < ⋯ < κ_r are positive integers and the non-negative numbers p and q are determined by the equalities p − q = signature W(λ) (which does not depend on λ) and p + q + 2(m_1 + ⋯ + m_r) = m. We shall call the factorization (0.1) a symmetric (Wiener-Hopf) factorization. A proof of this symmetric factorization theorem may also be found in [4].

In this paper we shall present an explicit construction of the symmetric factorization for the case when the matrix function W(λ) is rational. In particular, for a selfadjoint rational matrix function W(λ) we shall give explicit formulas for the factor W_+(λ), for its inverse W_+(λ)^{−1} and for the indices κ_1,…,κ_r and the numbers p and q.

To obtain our formulas we use the geometric construction of the Wiener-Hopf factorization carried out in [2] (see also [3]). As in [2] the starting point for the construction is a realization of W(λ), i.e., an expression of W(λ) in the form

(0.3)   W(λ) = D + C(λI_n − A)^{−1}B,   −∞ < λ < ∞,

and the final results are stated in terms of certain operators which we derive from A, B, C and D and certain invariant subspaces of A and A − BD^{−1}C. In order to obtain the symmetric factorization (0.1) by using the construction of [2] it is necessary to develop the construction given in [2] further and to modify it so that it reflects the symmetry of the factorization.

The selfadjointness of the matrix function W(λ) allows one (see [5,8]) to choose the realization (0.3) in such a way that HA = A^*H and HB = C^* for some invertible selfadjoint n × n matrix H. The indefinite inner product on C^n induced by the invertible selfadjoint operator H will play an essential role in the construction of the symmetric factorization.
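The effect of the symmetry conditions HA = A^*H, HB = C^* can be observed in a small experiment with randomly generated toy data (none of it taken from the paper): with H a signature matrix, S Hermitian, A = H^{−1}S, C = B^*H and D selfadjoint, the resulting W(λ) is Hermitian at every real point λ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2

H = np.diag([1.0, 1.0, -1.0, -1.0])           # invertible selfadjoint
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = (S + S.conj().T) / 2                      # Hermitian
A = np.linalg.inv(H) @ S                      # then HA = S = (HA)* = A*H
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
C = B.conj().T @ H                            # then HB = C*
D0 = rng.standard_normal((m, m))
D0 = (D0 + D0.T) / 2                          # selfadjoint D

def W(lam):
    return D0 + C @ np.linalg.solve(lam * np.eye(n) - A, B)

# For real lambda: W(lam)* = D0 + B*(lam - A*)^{-1} HB
#                          = D0 + B*H (lam - A)^{-1} B = W(lam).
for lam in (0.7, -1.3, 5.0):
    Wl = W(lam)
    assert np.allclose(Wl, Wl.conj().T)
```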

This paper is divided into two chapters. In Chapter I we review and modify the construction of the Wiener-Hopf factorization of an arbitrary rational matrix function given in [2]. This first chapter is organized in such a way that the results for the selfadjoint case, which are derived in the second chapter, appear by specifying further the operators constructed in Chapter I.

0.2 Summary. In this subsection W(λ) is an m × m selfadjoint rational matrix function which does not have poles and zeros on the real line including infinity. To construct a symmetric factorization of W(λ) we use the fact that W(λ) can be represented (see [5,8]; also Chapter 2 in [1]) in the form


Symmetric factorization 375

(0.4)   W(λ) = D + C(λ − A)^{−1}B,   −∞ ≤ λ ≤ ∞,

where A, B, C and D are matrices of sizes n × n, n × m, m × n and m × m, respectively, D is selfadjoint and invertible, A has no eigenvalues on the real line, and for some invertible selfadjoint n × n matrix H the following identities hold true:

(0.5)   HA = A^*H,   HB = C^*.

The fact that W(λ) has no zeros on the real line implies (see [1]) that, like A, the matrix A^× := A − BD^{−1}C has no eigenvalues on the real line. The interplay between the spectral properties of A and A^× will be an essential feature of our construction of a symmetric factorization.

First we consider the case of canonical factorization, when the middle term Σ(λ) in (0.1) is a constant signature matrix (i.e., the numbers m_1,…,m_r in (0.2) are all zero). Let M be the subspace of C^n spanned by the eigenvectors and generalized eigenvectors of A corresponding to eigenvalues in the upper half plane, and let M^× be the space spanned by the eigenvectors and generalized eigenvectors of A^× corresponding to eigenvalues in the lower half plane. The identities (0.5) imply that HM = M^⊥ and HM^× = (M^×)^⊥. Thus H(M ∩ M^×) = (M + M^×)^⊥, and it follows that

dim(M ∩ M^×) = codim(M + M^×).

In particular, M ∩ M^× = (0) if and only if C^n is the direct sum of M and M^×. The next theorem is a corollary of our main factorization theorem.

THEOREM 0.1. The rational matrix function W(λ) admits a symmetric canonical Wiener-Hopf factorization if and only if M ∩ M^× = (0), and in that case such a factorization is given by

(0.6)   W(λ) = [E + ED^{−1}CΠ(λ̄ − A)^{−1}B]^* Σ [E + ED^{−1}CΠ(λ − A)^{−1}B],

where Π is the projection of C^n = M ⊕ M^× along M onto M^× and Σ is a constant signature matrix which is congruent to D, the congruency being given by the invertible matrix E (i.e., D = E^*ΣE).
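The congruence D = E^*ΣE in Theorem 0.1 can be produced from an eigendecomposition. A sketch with a toy indefinite D (the choice of D is illustrative): writing D = UΛU^*, one may take E = |Λ|^{1/2}U^* and Σ = sign(Λ).

```python
import numpy as np

D = np.array([[2.0, 1.0],
              [1.0, -1.0]])                  # invertible selfadjoint, indefinite
evals, U = np.linalg.eigh(D)                 # D = U diag(evals) U*
E = np.diag(np.sqrt(np.abs(evals))) @ U.conj().T
Sigma = np.diag(np.sign(evals))

# E* Sigma E = U |L|^{1/2} sign(L) |L|^{1/2} U* = U L U* = D
assert np.allclose(E.conj().T @ Sigma @ E, D)
# signature of D = number of positive minus number of negative eigenvalues
assert int(np.sum(np.sign(evals))) == 0
```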

The case of non-canonical factorization (i.e., M ∩ M^× ≠ (0)) is much more involved. First we choose subspaces N ⊂ M and N^× ⊂ M^× such that N (resp. N^×) is a direct complement of M ∩ M^× in M (resp. M^×). The identities (0.5) allow us to construct a direct complement K of M + M^× in C^n such that (HK)^⊥ = N ⊕ K ⊕ N^×. In particular,


(0.7)   C^n = N ⊕ (M ∩ M^×) ⊕ K ⊕ N^×.

For i = 1,2,3,4 let Q_i be the projection onto the i-th subspace in the right hand side of (0.7) along the other subspaces in this direct sum decomposition of C^n. The projections Q_i are related as follows:

(0.8)   HQ_1 = Q_4^*H,   HQ_2 = Q_3^*H.

In M ∩ M^× one can choose bases {d_{jk}}_{k=1,…,α_j; j=1,…,s} and {e_{jk}}_{k=1,…,α_j; j=1,…,s} (see [2], Section 1.5) such that

(i) 1 ≤ α_1 ≤ ⋯ ≤ α_s;
(ii) (A − i)d_{jk} = d_{j,k+1} for k = 1,…,α_j − 1;
(ii)' (A + i)e_{jk} = e_{j,k+1} for k = 1,…,α_j − 1;
(iii) {d_{jk}}_{k=1,…,α_j−1; j=1,…,s} and {e_{jk}}_{k=1,…,α_j−1; j=1,…,s} are bases of M ∩ M^× ∩ Ker C;

and furthermore the vectors d_{jk} and e_{jk} are connected by

(0.9)   e_{jk} = Σ_{ν=0}^{k−1} binom(k−1, ν) (2i)^ν d_{j,k−ν}.

THEOREM 0.2. Let W(λ) = W_+(λ̄)^* Σ(λ) W_+(λ) be a symmetric factorization. Then the indices κ_1,…,κ_r appearing in the description (0.2) of the middle term are precisely the distinct numbers in the sequence α_1,…,α_s, and the numbers m_1,…,m_r in (0.2) are determined by

(0.10)   m_j = #{ i : α_i = κ_j },   j = 1,…,r.

In particular, m_1 + ⋯ + m_r = s.

From (0.9) one sees that Cd_{jα_j} = Ce_{jα_j} for j = 1,…,s. Put z_j = D^{−1}Cd_{jα_j} = D^{−1}Ce_{jα_j}. Then

Bz_j = BD^{−1}Cd_{jα_j} = (A − A^×)d_{jα_j} ∈ M + M^×,   j = 1,…,s,

and thus z_1,…,z_s is a linearly independent set of vectors in B^{−1}(M + M^×) ⊂ C^m. We shall prove that z_1,…,z_s can be extended to a basis z_1,…,z_m of C^m such that z_1,…,z_{m−s} is a basis of B^{−1}(M + M^×) and


(0.11)   [⟨Dz_j, z_k⟩]_{j,k=1}^{m} = ⋯,

where the positive numbers m_1,…,m_r are given by (0.10).


We use the bases {d_{jk}} and {e_{jk}} to introduce the so-called outgoing operators (see [2], Section 1.5) S_1, S_2 : M ∩ M^× → M ∩ M^×, as follows:

(S_1 − i)d_{jk} = d_{j,k+1},   k = 1,…,α_j,
(S_2 + i)e_{jk} = e_{j,k+1},   k = 1,…,α_j,

where, by definition, d_{jk} = e_{jk} = 0 for k = α_j + 1. Thus relative to the basis {d_{jk}} (resp. {e_{jk}}) the operator S_1 (resp. S_2) has a Jordan normal form with i (resp. −i) as the only eigenvalue and with blocks of sizes α_1,…,α_s. With S_1, S_2 we associate operators G_1, G_2 : C^m → M ∩ M^× by setting

Q_2(A^× − S_1 + G_1D^{−1}C)x = 0,   x ∈ M ∩ M^×;
Q_2(A − S_2 − G_2D^{−1}C)x = 0,   x ∈ M ∩ M^×;
⋯,   j = 1,…,s;
⋯,   j = s+1,…,m.

Next, we define operators T_1, T_2 : K → K and F_1, F_2 : K → C^m by

Q_3T_1Q_3 = H^{−1}(Q_2S_2Q_2)^*H,   F_1Q_3 = −(Q_2G_2)^*H,
Q_3T_2Q_3 = H^{−1}(Q_2S_1Q_2)^*H,   F_2Q_3 = −(Q_2G_1)^*H.

Note that T_1 has i as its only eigenvalue and −i is the only eigenvalue of T_2. The next theorem is our final result.

THEOREM 0.3. The selfadjoint rational matrix function W(λ) = D + C(λ − A)^{−1}B admits the following symmetric factorization:


Σ(λ) = I_p ⊕ (−I_q) ⊕ ⋯,

where κ_1,…,κ_r are the distinct numbers in the sequence α_1,…,α_s, the number m_j is equal to the number of times the index κ_j appears in the sequence α_1,…,α_s, the non-negative numbers p and q are determined by p − q = signature D and p + q = m − 2s, and the factor W_+(λ) and its inverse W_+(λ)^{−1} are given by

W_+(λ) = E + ED^{−1}(CQ_3 + CQ_4 + F_2Q_3)(λ − A)^{−1}B +
   + ED^{−1}CQ_2(λ − S_2)^{−1}V(λ − A)^{−1}B + ED^{−1}CQ_2(λ − S_2)^{−1}(Q_2B − G_2),

W_+(λ)^{−1} = E^{−1} − D^{−1}C(λ − A^×)^{−1}(Q_2B + Q_4B − G_2)E^{−1} −
   − D^{−1}C(λ − A^×)^{−1}V^×(λ − T_2)^{−1}Q_3BE^{−1} − D^{−1}(CQ_3 + F_2)(λ − T_2)^{−1}Q_3BE^{−1}.

Here E is the inverse of the matrix [z_1, …, z_m] and

V = Q_2AQ_4 − G_2D^{−1}CQ_4 + Q_2AQ_3 − (Q_2B − G_1)D^{−1}(CQ_3 + F_2) −
   − (Q_2B − G_1 − G_2)D^{−1}F_1,

V^× = Q_4A^×Q_3 − Q_4BD^{−1}F_2 + Q_2AQ_3 − (Q_2B − G_1)D^{−1}(CQ_3 + F_2) −
   − G_1D^{−1}(CQ_3 + F_2).

A somewhat less general version of Theorem 0.3 has appeared in Chapter V of [9]. The fact that W(λ) is rational is not essential for our formulas. Theorem 0.3 also holds true for a selfadjoint matrix function which is analytic in a neighbourhood of the real line and admits a representation of the form (0.4), where A is now allowed to be a bounded linear operator acting on an infinite dimensional Hilbert space. Of course in that case it is necessary to assume that dim(M ∩ M^×) is finite.


I. WIENER-HOPF FACTORIZATION

Throughout this chapter W(λ) is a rational m × m matrix function which does not have poles and zeros on the real line. By definition (see [6,4]) a (right) Wiener-Hopf factorization of W(λ) relative to the real line is a representation of W(λ) in the form

W(λ) = W_−(λ) diag( ((λ−i)/(λ+i))^{κ_1}, …, ((λ−i)/(λ+i))^{κ_m} ) W_+(λ),   −∞ ≤ λ ≤ ∞,

where W_−(λ) and W_+(λ) are rational matrix functions such that W_+(λ) (resp. W_−(λ)) has no poles and zeros in the closed upper (resp. lower) half plane, including the point infinity. In this chapter we review and modify the construction of the Wiener-Hopf factorization given in [2].

1.1 Realizations with centralized singularities

The first step of the construction in [2] is to represent W(λ) in the form

(1.1)   W(λ) = D + C(λ − A)^{−1}B,   −∞ ≤ λ ≤ ∞.

Here A, B, C and D are matrices of sizes n × n, n × m, m × n and m × m, respectively, the matrix A has no eigenvalues on the real line and D is invertible.

The fact that W = W(·) has no zeros on the real line implies (see [1]) that A^× := A − BD^{−1}C has no eigenvalues on the real line. Given (1.1) the sextet Θ = (A,B,C,D;C^n,C^m) is called a realization of W with main operator A and state space C^n. The matrix A^× is called the associate main operator of Θ. (In what follows we shall often identify a p × q matrix with its canonical action as an operator from C^q into C^p.)

In [2] the Wiener-Hopf factorization of W is obtained by using the geometric factorization formulas of Theorem 1.1 in [1]. This requires a realization of W with so-called centralized singularities. The realization Θ = (A,B,C,D;C^n,C^m) of W is said to have centralized singularities if the following properties hold true (see [2], Section 1.3):

(i) the state space C^n has a decomposition C^n = X_1 ⊕ X_2 ⊕ X_3 ⊕ X_4 and relative to this decomposition the operators A, A^×, B and C can be written as


(1.2)
A = [ A_1   *    *    *  ]        A^× = [ A_1^×   0      0      0   ]
    [  0   A_2   0    0  ]              [  *    A_2^×    0      0   ]
    [  0    0   A_3   *  ]              [  *      0    A_3^×    0   ]
    [  0    0    0   A_4 ]              [  *      *      *    A_4^× ]

B = [ B_1 ]        C = ( C_1  C_2  C_3  C_4 ),
    [ B_2 ]
    [ B_3 ]
    [ B_4 ]

where the entries satisfy the following conditions:

Kaashoek and Ran

(ii) the eigenvalues of A_1 and A_1^× are in the upper half plane, the eigenvalues of A_4 and A_4^× are in the lower half plane;

(iii) A_2 − i and A_2^× + i have the same nilpotent Jordan form in bases {d_{jk}}_{k=1,…,α_j; j=1,…,t} and {e_{jk}}_{k=1,…,α_j; j=1,…,t}, respectively; more precisely:

(1.3.i)   (A_2 − i)d_{jk} = d_{j,k+1},   k = 1,…,α_j  (d_{j,α_j+1} = 0),
(1.3.ii)  (A_2^× + i)e_{jk} = e_{j,k+1},   k = 1,…,α_j  (e_{j,α_j+1} = 0),

where it is assumed that α_1 ≤ ⋯ ≤ α_t; and further these two bases are related by

(1.4.i)   e_{jk} = Σ_{ν=0}^{k−1} binom(k−1, ν) (2i)^ν d_{j,k−ν},
(1.4.ii)  d_{jk} = Σ_{ν=0}^{k−1} binom(k−1, ν) (−2i)^ν e_{j,k−ν};

(iv) rank B_2 = rank C_2 = t;

(v) A_3^× − i and A_3 + i have the same nilpotent Jordan form in bases {f_{jk}}_{k=1,…,ω_j; j=1,…,s} and {g_{jk}}_{k=1,…,ω_j; j=1,…,s}, respectively; more precisely:

(1.5.i)   (A_3^× − i)f_{jk} = f_{j,k+1},   k = 1,…,ω_j  (f_{j,ω_j+1} = 0),
(1.5.ii)  (A_3 + i)g_{jk} = g_{j,k+1},   k = 1,…,ω_j  (g_{j,ω_j+1} = 0),

where it is assumed that ω_1 ≤ ⋯ ≤ ω_s; and further, these two bases are related by

(1.6.i)   g_{jk} = Σ_{ν=0}^{k−1} binom(ν+ω_j−k, ν) (−2i)^ν f_{j,k−ν},
(1.6.ii)  f_{jk} = Σ_{ν=0}^{k−1} binom(ν+ω_j−k, ν) (2i)^ν g_{j,k−ν};

(vi) rank B_3 = rank C_3 = s.


Note that the order of α_1,…,α_t is different from the order used in [2]. With these notations one has the following theorem (see Theorem 1.3.1 in [2]).

THEOREM 1.1. Let Θ = (A,B,C,D;C^n,C^m) be a realization of W with centralized singularities. Put

W_−(λ) = D + C_1(λ − A_1)^{−1}B_1,
W_+(λ) = I + D^{−1}C_4(λ − A_4)^{−1}B_4,
D(λ) = I + D^{−1}C_2(λ − A_2)^{−1}B_2 + D^{−1}C_3(λ − A_3)^{−1}B_3.

Then W(λ) = W_−(λ)D(λ)W_+(λ), −∞ ≤ λ ≤ ∞, the factor W_+(λ) (resp. W_−(λ)) has no poles and zeros in the closed upper (resp. lower) half plane and at infinity, and for some invertible m × m matrix E

ED(λ)E^{−1} = diag( ((λ−i)/(λ+i))^{−α_1}, …, ((λ−i)/(λ+i))^{−α_t}, 1, …, 1, ((λ−i)/(λ+i))^{ω_1}, …, ((λ−i)/(λ+i))^{ω_s} ).

In particular, modulo a basis transformation in C^m, the factorization W(λ) = W_−(λ)D(λ)W_+(λ) is a right Wiener-Hopf factorization of W relative to the real line.

Except for minor changes involving the operator D, the proof of the above theorem can be found in [2]. With Theorem 1.1 the problem to factorize W is reduced to the construction of a realization of W with centralized sin­gularities. Such a construction, which has to start from the representation (1.1), will be carried out in the next sections. First we collect together the necessary data, on the basis of which, by dilating the original realization, we construct a new realization with centralized singularities.

1.2 Incoming data and related feedback operators

In this section Θ = (A,B,C,D;C^n,C^m) is a realization of W. So A and A^× = A − BD^{−1}C have no real eigenvalues. Let P (resp. P^×) be the spectral projection of A (resp. A^×) corresponding to the eigenvalues in the upper half plane. Put M = Im P and M^× = Im(I − P^×).

We start this section by defining the sequence of incoming subspaces for Θ, as follows:

H_j = M + M^× + Im B + ⋯ + Im A^{j−1}B,   j = 0,1,2,….

Here H_0 = M + M^×. Let ε be a complex number. Note that the spaces H_j do not change if A is replaced by either A − ε or A^× − ε. It follows that


Since H_n = C^n, this identity can be used to construct an incoming basis for A − ε (see [2], Section 1.4). A set of vectors {f_{jk} : k = 1,…,ω_j, j = 1,…,s} is called an incoming basis with respect to A − ε if

(i) 1 ≤ ω_1 ≤ ⋯ ≤ ω_s;
(ii) (A − ε)f_{jk} − f_{j,k+1} ∈ M + M^× + Im B, k = 1,…,ω_j, where f_{j,ω_j+1} = 0 by definition;
(iii) the vectors f_{jk}, k = 1,…,ω_j, j = 1,…,s form a basis for a complement of M + M^×;
(iv) the vectors f_{11},…,f_{s1} form a basis for M + M^× + Im B modulo M + M^×.

The numbers ω_1,…,ω_s are called the incoming indices; they do not depend on either ε or the choice of the basis.

Let {f_{jk}}_{k=1,…,ω_j; j=1,…,s} be an incoming basis with respect to A − ε. Denote by K the span of the vectors f_{jk}. Then K is a complement to M + M^×. Connected with this incoming basis is an incoming operator T : K → K given by (T − ε)f_{jk} = f_{j,k+1} for k = 1,…,ω_j (here, again, f_{j,ω_j+1} = 0). Note that with respect to the basis {f_{jk}}_{k=1,…,ω_j; j=1,…,s} the operator T has Jordan normal form with ε as the only eigenvalue and Jordan blocks of sizes ω_1,…,ω_s.

The next proposition shows how a given incoming basis with parameter i may be transformed into an incoming basis with parameter −i (see [2], Proposition 1.4.2).

PROPOSITION 2.1. Let {f_{jk}}_{k=1,…,ω_j; j=1,…,s} be an incoming basis for Θ with respect to A − i. Put

(2.1)   g_{jk} = Σ_{ν=0}^{k−1} binom(ν+ω_j−k, ν) (−2i)^ν f_{j,k−ν}.

Then {g_{jk}}_{k=1,…,ω_j; j=1,…,s} is an incoming basis for Θ with respect to A + i. Further, if T_1 and T_2 are the incoming operators associated with these incoming bases, respectively, then T_1 and T_2 satisfy

(2.2)   (T_1 − T_2)g_{jk} = −(−2i)^k binom(ω_j, k) g_{j1}   for k = 1,…,ω_j, j = 1,…,s.

By direct checking one proves the following analogue of (2.1):

(2.3)   f_{jk} = Σ_{ν=0}^{k−1} binom(ν+ω_j−k, ν) (2i)^ν g_{j,k−ν}.

It follows that (2.2) may be replaced by

(2.4)   (T_1 − T_2)f_{jk} = (2i)^k binom(ω_j, k) f_{j1}.
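That the base changes (2.1) and (2.3) undo each other can be verified by assembling their matrices for a single chain. A sketch for one chain of length ω (the value of ω is arbitrary):

```python
import numpy as np
from math import comb

omega = 5

# G maps f-coordinates to g-coordinates per (2.1):
#   g_k = sum_{v=0}^{k-1} binom(v+omega-k, v) (-2i)^v f_{k-v}
# F is the analogous map (2.3), with (+2i)^v instead of (-2i)^v.
G = np.zeros((omega, omega), dtype=complex)
F = np.zeros((omega, omega), dtype=complex)
for k in range(1, omega + 1):
    for v in range(k):
        G[k - 1, k - v - 1] = comb(v + omega - k, v) * (-2j) ** v
        F[k - 1, k - v - 1] = comb(v + omega - k, v) * (2j) ** v

# The two substitutions are mutually inverse:
assert np.allclose(F @ G, np.eye(omega))
assert np.allclose(G @ F, np.eye(omega))
```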


Now let {f_{jk}}_{k=1,…,ω_j; j=1,…,s} be an incoming basis for A − i and construct the incoming basis for A + i by (2.1). Next, let y_1,…,y_s be vectors in C^m such that for j = 1,…,s:

(2.5)   f_{j1} − By_j = g_{j1} − By_j ∈ M + M^×.

Since the vectors f_{11},…,f_{s1} form a basis for M + M^× + Im B modulo M + M^×, such a set of vectors {y_1,…,y_s} exists and is a basis for C^m modulo B^{−1}[M + M^×]; in particular

(2.6)   C^m = span{y_1,…,y_s} ⊕ B^{−1}[M + M^×].

In this way we have fixed a set of incoming data:

(2.7)   {f_{jk}}_{k=1,…,ω_j; j=1,…,s},   {g_{jk}}_{k=1,…,ω_j; j=1,…,s},   {y_j}_{j=1,…,s}.

Let K be the span of the vectors f_{jk}. Two operators F_1, F_2 : K → C^m are called a pair of feedback operators corresponding to the incoming data (2.7) if

(2.8)   (A − T_1 + BD^{−1}F_1)f_{jk} ∈ M + M^×,   k = 1,…,ω_j, j = 1,…,s;

(2.9)   (A^× − T_2 − BD^{−1}F_2)f_{jk} ∈ M + M^×,   k = 1,…,ω_j, j = 1,…,s;

(2.10)   D^{−1}(C + F_1 + F_2)f_{jk} = (2i)^k binom(ω_j, k) y_j,   k = 1,…,ω_j, j = 1,…,s.

Such a pair of operators can be constructed as follows (see [2], Section 1.6). First note that (A − T_1)f_{jk} ∈ M + M^× + Im B. So there exist vectors u_{jk} in C^m such that (A − T_1)f_{jk} + Bu_{jk} ∈ M + M^×. Define F_1 : K → C^m by setting F_1f_{jk} = Du_{jk}. Then F_1 satisfies (2.8). Next, choose F_2 such that (2.10) holds. This defines F_2 uniquely. It remains to show that (2.9) holds. Indeed, using (2.4) and (2.10) we have:

(A^× − T_2 − BD^{−1}F_2)f_{jk} =
   = (A − T_1 + BD^{−1}F_1)f_{jk} + (T_1 − T_2)f_{jk} − BD^{−1}(C + F_1 + F_2)f_{jk} =
   = (A − T_1 + BD^{−1}F_1)f_{jk} + (2i)^k binom(ω_j, k) (f_{j1} − By_j) ∈ M + M^×.

1.3. Outgoing data and related output injection operators

In this section we make the same assumptions concerning Θ as in the first paragraph of the previous section.

We begin by considering the sequence of outgoing subspaces for Θ, given by

K_j = M ∩ M^× ∩ Ker C ∩ ⋯ ∩ Ker CA^{j−1},   j = 0,1,2,….
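The dimensions of such a nested chain of kernels can be computed from ranks. The sketch below simplifies the definition above by dropping the intersection with M ∩ M^× and tracking only Ker C ∩ Ker CA ∩ ⋯ for a toy pair (A, C) of my own choosing:

```python
import numpy as np

# Toy data: A is a 4x4 nilpotent shift, C reads off the first coordinate.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

def kernel_dim(j):
    # dim(Ker C ∩ ... ∩ Ker CA^{j-1}) = n - rank of the stacked rows
    if j == 0:
        return A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(j)])
    return int(A.shape[0] - np.linalg.matrix_rank(O))

dims = [kernel_dim(j) for j in range(5)]
assert dims == [4, 3, 2, 1, 0]     # the chain decreases until it stabilizes
```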


Here K_0 = M ∩ M^×. The spaces K_j do not change if A is replaced by A − ε or A^× − ε. Hence:

K_j ∩ (A − ε)^{−1}K_j = K_{j+1},   j = 0,1,2,….

Using this identity one can construct an outgoing basis for A − ε (see [2], Section 1.5). A set of vectors {d_{jk} : k = 1,…,α_j, j = 1,…,t} is called an outgoing basis with respect to A − ε if

(i) 1 ≤ α_1 ≤ ⋯ ≤ α_t;
(ii) (A − ε)d_{jk} = d_{j,k+1}, k = 1,…,α_j − 1;
(iii) the vectors d_{jk}, k = 1,…,α_j, j = 1,…,t form a basis for M ∩ M^×;
(iv) the vectors d_{jk}, k = 1,…,α_j − 1, j = 1,…,t form a basis for K_1.

Note that the order of α_1,…,α_t is different from the order used in [2].

The numbers α_1,…,α_t are called the outgoing indices; they do not depend on either ε or the choice of the outgoing basis.

Connected with the outgoing basis {d_{jk}}_{k=1,…,α_j; j=1,…,t} is an outgoing operator S : M ∩ M^× → M ∩ M^× given by (S − ε)d_{jk} = d_{j,k+1} for k = 1,…,α_j, j = 1,…,t. Here d_{j,α_j+1} = 0. The operator S has Jordan normal form with ε as the only eigenvalue and Jordan blocks of sizes α_1,…,α_t with respect to the basis {d_{jk}}_{k=1,…,α_j; j=1,…,t}.

The next proposition is the analogue of Proposition 2.1 (see [2], Proposition 5.2).

PROPOSITION 3.1. Let {d_{jk}}_{k=1,…,α_j; j=1,…,t} be an outgoing basis for Θ with respect to A − i. Put

(3.1)   e_{jk} = Σ_{ν=0}^{k−1} binom(k−1, ν) (2i)^ν d_{j,k−ν}.

Then {e_{jk}}_{k=1,…,α_j; j=1,…,t} is an outgoing basis for Θ with respect to A + i. Further, if S_1 and S_2 are the outgoing operators associated with these outgoing bases, respectively, then S_1 and S_2 satisfy

(3.2)   (S_1 − S_2)e_{jα_j} = Σ_{ν=0}^{α_j−1} binom(α_j, ν+1) (2i)^{ν+1} d_{j,α_j−ν},
        (S_1 − S_2)x = 0   for x ∈ M ∩ M^× ∩ Ker C.

By direct checking one also sees that

(3.3)   d_{jk} = Σ_{ν=0}^{k−1} binom(k−1, ν) (−2i)^ν e_{j,k−ν}.

Hence the first formula in (3.2) may be replaced by


(3.4)   (S_1 − S_2)d_{jα_j} = −Σ_{ν=0}^{α_j−1} binom(α_j, ν+1) (−2i)^{ν+1} e_{j,α_j−ν}.

Let {d_{jk}}_{k=1,…,α_j; j=1,…,t} be an outgoing basis with respect to A − i, and let {e_{jk}}_{k=1,…,α_j; j=1,…,t} be the outgoing basis with respect to A + i given by (3.1). Put z_j = D^{−1}Ce_{jα_j} = D^{−1}Cd_{jα_j}, j = 1,…,t. In this way we have fixed a set of outgoing data:

(3.5)   {d_{jk}}_{k=1,…,α_j; j=1,…,t},   {e_{jk}}_{k=1,…,α_j; j=1,…,t},   {z_j}_{j=1,…,t}.

Note that the vectors {z_j}_{j=1,…,t} form a basis of D^{−1}C[M ∩ M^×]. Since BD^{−1}C[M ∩ M^×] ⊂ M + M^×, we have z_j ∈ B^{−1}[M + M^×] for j = 1,…,t. Let Y_0 be a complement of span{z_j}_{j=1,…,t} in B^{−1}[M + M^×]. Furthermore, choose a complement Y_1 of B^{−1}[M + M^×] in C^m.

Two operators G_1 and G_2 mapping C^m into M ∩ M^× are called a pair of output injection operators corresponding to the outgoing data (3.5) if

(3.6)   p(A^× − S_1 + G_1D^{−1}C)x = 0,   x ∈ M ∩ M^×,
(3.7)   p(A − S_2 − G_2D^{−1}C)x = 0,   x ∈ M ∩ M^×,
(3.8)   p(B − G_1 − G_2)y = 0,   y ∈ Y_0 ⊕ Y_1,
(3.9)   p(B − G_1 − G_2)z_j = (S_2 − S_1)d_{jα_j},   j = 1,…,t.

Here p is a projection of C^n onto M ∩ M^×, which we assume to be given in advance. Of course, the definition of G_1 and G_2 does not depend only on the outgoing data (3.5) and the related outgoing operators S_1 and S_2, but also on the choice of the complements Y_0 and Y_1 and the projection p.

To construct such a pair of operators, let G_2 be an operator for which G_2z_j = p(A + i)e_{jα_j} for j = 1,…,t. Then G_2 satisfies (3.7). Construct G_1 by (3.8) and (3.9). This determines G_1 completely, and it remains to show that (3.6) holds. To do this first note that (3.9) and (3.2) together imply that

p((B − G_1 − G_2)D^{−1}Cx − (S_2 − S_1)x) = 0,   x ∈ M ∩ M^×.

Subtracting this from (3.7) yields (3.6).

1.4. Dilation to realizations with centralized singularities

In this section we show by construction how an arbitrary realization Θ of W may be dilated to a realization with centralized singularities. In the next three paragraphs we fix sets of incoming and outgoing data and the corresponding spaces and operators. On the basis of this information we shall introduce the dilation.


Throughout this section θ = (A,B,C,D; C^n, C^m) is an arbitrary realization of W. Furthermore, {e_{jk}}_{k=1}^{α_j}, j = 1,...,t, and {d_{jk}}_{k=1}^{α_j}, j = 1,...,t, are outgoing bases for A+i and A-i, respectively, and {f_{jk}}_{k=1}^{ω_j}, j = 1,...,s, and {g_{jk}}_{k=1}^{ω_j}, j = 1,...,s, are incoming bases for A-i and A+i, respectively. Put z_j = D^{-1}C e_{jα_j}, j = 1,...,t, and let y_j, j = 1,...,s, be such that f_{j1} - By_j, g_{j1} - By_j ∈ M + M^x. In this way we have fixed a set of incoming data:

(4.1) {f_{jk}}_{k=1}^{ω_j}, j=1,...,s,   {g_{jk}}_{k=1}^{ω_j}, j=1,...,s,   {y_j}_{j=1}^s,

and a set of outgoing data:

(4.2) {d_{jk}}_{k=1}^{α_j}, j=1,...,t,   {e_{jk}}_{k=1}^{α_j}, j=1,...,t,   {z_j}_{j=1}^t.
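For orientation, recall that a realization θ = (A,B,C,D; C^n, C^m) determines W via W(λ) = D + C(λ - A)^{-1}B. A minimal numerical sketch of this evaluation (toy data, an illustration only and not part of the text; regularity at infinity appears as W(λ) → D):

```python
import numpy as np

def transfer(lam, A, B, C, D):
    """Evaluate W(lam) = D + C (lam*I - A)^{-1} B for a realization (A, B, C, D)."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(lam * np.eye(n) - A, B)

# toy realization; A has eigenvalues +/- i, so no real eigenvalues
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.eye(2)
C = np.eye(2)
D = np.eye(2)

W = transfer(5.0, A, B, C, D)
# regularity at infinity: W(lam) -> D as lam -> infinity
assert np.allclose(transfer(1e9, A, B, C, D), D, atol=1e-6)
```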

Let T_1, T_2 and S_1, S_2 be the corresponding incoming and outgoing operators, respectively.

The subspace of X spanned by the set of vectors {f_{jk}}_{k=1}^{ω_j}, j = 1,...,s, will be denoted by K. Choose subspaces N and N^x such that M = N ⊕ (M ∩ M^x) and M^x = N^x ⊕ (M ∩ M^x). Then X has the following direct sum decomposition:

X = N ⊕ (M ∩ M^x) ⊕ K ⊕ N^x.

Denote by Q_j the projection on the j-th space in this decomposition along the other spaces (j = 1,2,3,4).

Furthermore, let F_1, F_2: K → C^m be a pair of feedback operators corresponding to the incoming data (4.1), and let G_1, G_2: C^m → M ∩ M^x be a pair of output injection operators corresponding to the outgoing data (4.2). The operator p appearing in the formulas (3.6)-(3.9) (which define the operators G_1, G_2) is chosen to be Q_2, and for the space Y_1 in (3.8) we take the space spanned by the vectors y_1,...,y_s (cf. formula (2.6)). The choice of the space Y_0 in (3.8) is not restricted. Put

(4.3) X̃ = (M ∩ M^x) ⊕ (M ∩ M^x) ⊕ C^n ⊕ K ⊕ K.

Consider operators Ã: X̃ → X̃, B̃: C^m → X̃ and C̃: X̃ → C^m given by

Ã = [ S_1  0    A_10  A_13  A_14
      0    S_2  A_20  A_23  A_24
      0    0    A     A_03  A_04
      0    0    0     T_1   0
      0    0    0     0     T_2 ],

B̃ = [ G_1  G_2  B  0  0 ]^T,   C̃ = ( 0  0  C  F_1  F_2 ).


Symmetric factorization 387

Also, write for Ã^x = Ã - B̃D^{-1}C̃:

Ã^x = [ S_1  0    A^x_10  A^x_13  A^x_14
        0    S_2  A^x_20  A^x_23  A^x_24
        0    0    A^x     A^x_03  A^x_04
        0    0    0       T_1     0
        0    0    0       0       T_2 ].

Here S_1, S_2, T_1, T_2, F_1, F_2, G_1 and G_2 are defined in the preceding paragraphs. The sextet θ̃ = (Ã,B̃,C̃,D; X̃, C^m) is a dilation of θ and hence again a realization of W. We shall show that for a suitable choice of the operators A_ij appearing in the formula for Ã the realization θ̃ has centralized singularities.

THEOREM 4.1. Define the operators A_ij as follows. First, set

(4.4) A_10(Q_1 + Q_2) = 0,   A_10 Q_4 = Q_2 A^x Q_4 + G_1 D^{-1} C Q_4;

(4.5) (Q_3 + Q_4)A_04 = 0,   Q_1 A_04 Q_3 = -Q_1 A^x Q_3 + Q_1 B D^{-1} F_2 Q_3;

(4.6) A_20(Q_2 + Q_4) = G_2 D^{-1} C (Q_2 + Q_4),   A_20 Q_1 = Q_2 A Q_1;

(4.7) (Q_1 + Q_3)A_03 = (Q_1 + Q_3)B D^{-1} F_1,   Q_4 A_03 Q_3 = -Q_4 A Q_3.

Next, put

(4.8)

(4.9)

(4.10)

(4.11)

Then θ̃ is a realization with centralized singularities.

To prove Theorem 4.1 we start with an investigation of the spectral properties of Ã and Ã^x. Note that whatever the choice of the operators A_ij, the operators Ã and Ã^x have no real eigenvalues. Let P̃ and P̃^x be the spectral projections of Ã and Ã^x, respectively, corresponding to the upper half plane. Then


P̃ = [ I  0  P_10  P_13  P_14
      0  0  P_20  P_23  P_24
      0  0  P     P_03  P_04
      0  0  0     I     0
      0  0  0     0     0 ],

I - P̃^x = [ 0  0  P^x_10  P^x_13  P^x_14
            0  I  P^x_20  P^x_23  P^x_24
            0  0  I-P^x   P^x_03  P^x_04
            0  0  0       0       0
            0  0  0       0       I ],

where P (resp., P^x) is the spectral projection of A (resp., A^x) corresponding to the upper half plane.
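The projections P̃ and P̃^x are Riesz (spectral) projections for the upper half plane. For a diagonalizable matrix with no real eigenvalues such a projection can be formed from an eigendecomposition; the following sketch is an illustration only and is not part of the construction in the text:

```python
import numpy as np

def upper_spectral_projection(A):
    """Riesz (spectral) projection of A onto the invariant subspace for the
    eigenvalues in the open upper half plane; assumes A is diagonalizable
    and has no real eigenvalues."""
    w, V = np.linalg.eig(A)
    d = (w.imag > 0).astype(complex)      # indicator of upper-half-plane eigenvalues
    return V @ np.diag(d) @ np.linalg.inv(V)

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +/- i
P = upper_spectral_projection(A)
assert np.allclose(P @ P, P)              # idempotent
assert np.allclose(P @ A, A @ P)          # commutes with A
```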

LEMMA 4.2. Independent of the choice of the operators A_ij we have:

M̃ = { (u, P_20 m + P_23 x, m + P_03 x, x, 0)^T : m ∈ M, x ∈ K, u ∈ M ∩ M^x },

M̃^x = { (P^x_10 m^x + P^x_14 x, u, m^x + P^x_04 x, 0, x)^T : m^x ∈ M^x, x ∈ K, u ∈ M ∩ M^x },

M̃ ∩ M̃^x = { (P^x_10 a, P_20 a, a, 0, 0)^T : a ∈ M ∩ M^x },

M̃ + M̃^x = { (x_1, x_2, z + P_03 x_3 + P^x_04 x_4, x_3, x_4)^T : x_1, x_2 ∈ M ∩ M^x, x_3, x_4 ∈ K, z ∈ M + M^x }.

PROOF. Suppose x̃ = (x_1, x_2, x_0, x_3, x_4)^T ∈ Im P̃. Then x_4 = 0. Since P̃x̃ = x̃ the vector x̃ satisfies the identities

P_20 x_0 + P_23 x_3 = x_2,
P x_0 + P_03 x_3 = x_0.

From P̃^2 = P̃ one obtains

(4.12) P_20 P_03 = 0,  P_10 P = 0,  P P_03 = 0,  P_10 P_03 + P_13 = 0,  P_20 P = P_20.

So P_20 x_0 = P_20 P x_0, and putting m = P x_0, x = x_3 one obtains that x̃ has the desired form.

Conversely, for every m ∈ M, x ∈ K and u ∈ M ∩ M^x one has

P̃ (u, P_20 m + P_23 x, m + P_03 x, x, 0)^T =
  = (u + P_10 m + P_10 P_03 x + P_13 x, P_20 m + P_20 P_03 x + P_23 x, P m + P P_03 x + P_03 x, x, 0)^T =
  = (u, P_20 m + P_23 x, m + P_03 x, x, 0)^T

according to the formulas (4.12). Hence M̃ has the desired form.

Using the fact that P̃^x is a projection one obtains the formula for M̃^x. The formulas for M̃ ∩ M̃^x and M̃ + M̃^x now easily follow. □

LEMMA 4.3. Assume (4.4)-(4.7) hold. Then

(i) A_10 P = 0,  (I - P)A_04 = 0;
(ii) P^x A^x_03 = 0,  A^x_20(I - P^x) = 0;
(iii) P^x_10 = Q_2(I - P^x),  P_20 = Q_2 P;
(iv) P_03 = (I - P)Q_3,  P^x_04 = P^x Q_3.

PROOF. From (4.4) and (4.6) it is immediately clear that A_10 P = 0 and A^x_20(I - P^x) = 0, because Im P = Im(Q_1 + Q_2), Im(I - P^x) = Im(Q_2 + Q_4). Further, (4.5) implies Im A_04 ⊂ Im(Q_1 + Q_2) = Ker(I - P), so (I - P)A_04 = 0. Similarly, P^x A^x_03 = 0. This proves (i) and (ii).

From (iii) and (iv) we only show P_20 = Q_2 P; the other formulas are obtained similarly. From Ã P̃ = P̃ Ã one obtains P_20 A = S_2 P_20 + A_20 P. Since P_20 P = P_20, it follows that P_20 is a solution of the Lyapunov equation

X(PAP) - S_2 X = A_20 P.

Now PAP and S_2 have no common eigenvalues, so P_20 is the unique solution of this equation. We compute A_20 P from (4.6). Take x ∈ M; then


A_20 x = A_20(Q_1 + Q_2)x = Q_2 A Q_1 x + G_2 D^{-1} C Q_2 x.

According to (3.7), G_2 D^{-1} C Q_2 x = Q_2(A - S_2)Q_2 x, so

A_20 x = Q_2 A x - S_2 Q_2 x.

It follows that Q_2 P solves the Lyapunov equation mentioned above. Hence P_20 = Q_2 P. □
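The uniqueness argument just used is the standard fact that a Sylvester (Lyapunov-type) equation AX + XB = C has exactly one solution when the spectra of A and -B are disjoint. A small dense solver illustrating this (an aside, not taken from the text):

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve A X + X B = C by vectorization:
    (I kron A + B^T kron I) vec(X) = vec(C), with column-major vec.
    The solution is unique iff sigma(A) and sigma(-B) are disjoint."""
    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((n, m), order="F")

# shape of the argument above: S2 X - X M = Q with sigma(S2), sigma(M)
# disjoint has exactly one solution
S2 = np.array([[-1.0, 0.0], [0.0, -2.0]])
M = np.array([[1.0, 0.3], [0.0, 2.0]])
Q = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_sylvester(S2, -M, Q)            # S2 X + X(-M) = Q
assert np.allclose(S2 @ X - X @ M, Q)
```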

LEMMA 4.4. Assume (4.4)-(4.11) hold. Then

P_20 Q_3 + P_23 = 0,   P^x_10 Q_3 + P^x_14 = 0.

PROOF. Take x ∈ K. From Ã P̃ = P̃ Ã one sees that

(4.13) S_2 P_23 + A_20 P_03 + A_23 = P_20 A_03 + P_23 T_1;
       S_2 P_20 + A_20 P = P_20 A.

Using these identities and Lemma 4.3 one obtains that P_20 Q_3 + P_23 satisfies

S_2(P_20 Q_3 + P_23)x - (P_20 Q_3 + P_23)T_1 x = Q_2 P(A - T_1 + A_03)x - A_23 x - A_20 x

for all x ∈ K. We claim that (A - T_1 + A_03)x ∈ M. Indeed, (4.7) implies Q_4(A - T_1 + A_03)x = Q_4(A + A_03)x = 0, and since (A_03 - B D^{-1} F_1)x ∈ M^x one has Q_3(A - T_1 + A_03)x = Q_3(A - T_1 + B D^{-1} F_1)x, which is zero according to (2.8). Hence

S_2(P_20 Q_3 + P_23)x - (P_20 Q_3 + P_23)T_1 x = Q_2(A - T_1 + A_03)x - A_23 x - A_20 x =
  = Q_2 A x + Q_2 A_03 x - A_23 x - A_20 x.

From (4.8), (4.9) and (4.10) one sees that the right-hand side of this equation is zero. Hence P_20 Q_3 + P_23 satisfies the Lyapunov equation

(4.14) S_2(P_20 Q_3 + P_23) - (P_20 Q_3 + P_23)T_1 = 0.

Since S_2 and T_1 have no common eigenvalues one obtains P_20 Q_3 + P_23 = 0. Similarly, one shows that

(4.15) S_1(P^x_10 Q_3 + P^x_14) - (P^x_10 Q_3 + P^x_14)T_2 = 0,

which implies P^x_10 Q_3 + P^x_14 = 0. □

To show that Ã and Ã^x have the desired triangular form we have to introduce the spaces X_1, X_2, X_3 and X_4 which appear in the definition of a realization with centralized singularities. This will be done as follows:

X_1 = { (0, P_20 m + P_23 x, m + P_03 x, x, 0)^T : x ∈ K, m ∈ M },

X_2 = { (a, a, a, 0, 0)^T : a ∈ M ∩ M^x },

X_3 = { (0, 0, x, x, x)^T : x ∈ K },

X_4 = { (P^x_10 m^x + P^x_14 x, 0, m^x + P^x_04 x, 0, x)^T : x ∈ K, m^x ∈ M^x }.

LEMMA 4.5. Suppose (4.4)-(4.11) hold. Then

X̃ = X_1 ⊕ X_2 ⊕ X_3 ⊕ X_4,

M̃ = X_1 ⊕ X_2,   M̃^x = X_2 ⊕ X_4.

PROOF. From Lemmas 4.2 and 4.3 one sees that X_2 = M̃ ∩ M̃^x. Clearly, also X_1 ⊂ M̃, X_4 ⊂ M̃^x. Choose x̃ = (x_1, x_2, x_0, x_3, x_4)^T ∈ X̃. Consider the equations

x_1 = a + P^x_10 m^x + P^x_14 y_2,
x_2 = a + P_20 m + P_23 y_1,
x_0 = a + m + P_03 y_1 + m^x + P^x_04 y_2 + x,
x_3 = y_1 + x,
x_4 = y_2 + x,

where y_1, y_2, x ∈ K, m ∈ M, m^x ∈ M^x and a ∈ M ∩ M^x. With respect to the canonical decomposition X = N ⊕ (M ∩ M^x) ⊕ K ⊕ N^x we have, using Lemma 4.3:

x_0 = (I - Q_2)(m - P y_1) + (a + Q_2 m + Q_2 m^x - Q_2 P y_1 - Q_2(I - P^x)y_2) +
      + (x + y_1 + y_2) + (I - Q_2)(m^x - (I - P^x)y_2).

So Q_3 x_0 = x + y_1 + y_2 and Q_2 x_0 = a + Q_2 m + Q_2 m^x - Q_2 P y_1 - Q_2(I - P^x)y_2. From this and Lemmas 4.3 and 4.4 it follows that

(4.16) x = x_3 + x_4 - Q_3 x_0,
       y_1 = x_3 - x,   y_2 = x_4 - x,
       a = x_1 + x_2 - Q_2 x_0,
       m = Q_1 x_0 + x_2 - a + P y_1,
       m^x = Q_4 x_0 + x_1 - a + (I - P^x)y_2.

This shows that X̃ = X_1 ⊕ X_2 ⊕ X_3 ⊕ X_4, and also the decompositions for M̃ and M̃^x are clear. □

PROOF of Theorem 4.1. First we show that with respect to the decomposition X̃ = X_1 ⊕ X_2 ⊕ X_3 ⊕ X_4 the operators Ã and Ã^x have the desired triangular form. To show that Ã has triangular form we need to show ÃX_1 ⊂ X_1, ÃX_2 ⊂ X_1 ⊕ X_2 and ÃX_3 ⊂ X_1 ⊕ X_3. Similarly, for the triangular form of Ã^x we have to show Ã^x X_4 ⊂ X_4, Ã^x X_2 ⊂ X_2 ⊕ X_4 and Ã^x X_3 ⊂ X_3 ⊕ X_4.

Take a vector in X_1 and compute

Ã (0, P_20 m + P_23 x, m + P_03 x, x, 0)^T =
  = (A_10 m + (A_10 P_03 + A_13)x, (S_2 P_20 + A_20)m + (S_2 P_23 + A_20 P_03 + A_23)x,
     A m + (A P_03 + A_03)x, T_1 x, 0)^T.

Now A_10 P_03 + A_13 = A_10(I - P)Q_3 + A_13 = A_10 Q_3 + A_13 = 0 according to Lemma 4.3 and the formulas (4.8) and (4.9). Hence A_10 m + (A_10 P_03 + A_13)x = 0. Next, using (4.13) and rewriting the second and third coordinates one obtains

Ã (0, P_20 m + P_23 x, m + P_03 x, x, 0)^T =
  = (0, P_20(A m - A P x + P T_1 x + (A - T_1 + A_03)x) + P_23 T_1 x,
     (A m - A P x + P T_1 x + (A - T_1 + A_03)x) + P_03 T_1 x, T_1 x, 0)^T.

Since (A - T_1 + A_03)x ∈ M for x ∈ K it follows that ÃX_1 ⊂ X_1. Likewise one shows Ã^x X_4 ⊂ X_4.

Next, for a ∈ M ∩ M^x we have


Ã (a, a, a, 0, 0)^T = (S_1 a, S_1 a, S_1 a, 0, 0)^T + (0, A_20 a + (S_2 - S_1)a, (A - S_1)a, 0, 0)^T.

Since A_20 a + (S_2 - S_1)a = P_20(A - S_1)a, the second term is in X_1. Similarly

Ã^x (a, a, a, 0, 0)^T = (S_2 a, S_2 a, S_2 a, 0, 0)^T + (P^x_10(A^x - S_2)a, 0, (A^x - S_2)a, 0, 0)^T.

These formulas also yield property (iii) in the definition of a realization with centralized singularities. The bases occurring in property (iii) are induced by the outgoing bases in M ∩ M^x.

Next, take x ∈ K and consider

(4.17) Ã (0, 0, x, x, x)^T = (0, 0, T_2 x, T_2 x, T_2 x)^T +
       + ((A_10 + A_13 + A_14)x, (A_20 + A_23 + A_24)x,
          (A - T_1 + A_03 + A_04 + P(T_1 - T_2))x + P_03(T_1 - T_2)x, (T_1 - T_2)x, 0)^T.

Now (A_10 + A_13 + A_14)x = 0. Further, using Lemmas 4.3 and 4.4:

P_20(A - T_1 + A_03 + A_04 + P(T_1 - T_2))x + P_23(T_1 - T_2)x =
  = P_20(A - T_1 + A_03 + A_04)x = Q_2(A + A_03 + A_04)x.

From (4.8)-(4.11) one easily sees that the latter term equals (A_20 + A_23 + A_24)x. It follows that the last term in (4.17) is an element of X_1. Likewise for x ∈ K we have

(4.18) Ã^x (0, 0, x, x, x)^T = (0, 0, T_1 x, T_1 x, T_1 x)^T +
       + (P^x_10(A^x - T_2 + A^x_04 + A^x_03 + (I - P^x)(T_2 - T_1))x + P^x_14(T_2 - T_1)x, 0,
          (A^x - T_2 + A^x_04 + A^x_03 + (I - P^x)(T_2 - T_1))x + P^x_04(T_2 - T_1)x, 0, (T_2 - T_1)x)^T.

Formulas (4.17) and (4.18) also yield property (v), where the desired bases in


X_3 are induced by the incoming bases in K.

Further, property (ii) is also clear from the fact that M̃ = X_1 ⊕ X_2, M̃^x = X_2 ⊕ X_4. It remains to prove properties (iv) and (vi), which is done in the next lemma. □

LEMMA 4.6. Suppose (4.4)-(4.11) hold, and let Q̃_i be the projection of X̃ onto X_i along the spaces X_j, j ≠ i. Then for y ∈ C^m

Q̃_2 B̃ y = (u(y), u(y), u(y), 0, 0)^T,   Q̃_3 B̃ y = (0, 0, b(y), b(y), b(y))^T,

where u(y) = (G_1 + G_2 - Q_2 B)y and b(y) = -Q_3 B y. In particular, rank Q̃_2 B̃ = t and rank Q̃_3 B̃ = s. Further,

rank C̃ Q̃_2 = rank C|_{M ∩ M^x} = t,
rank C̃ Q̃_3 = rank (C + F_1 + F_2)|_K = s.

PROOF. Clearly, rank C̃ Q̃_2 = t. Further, from (2.10) we obtain rank C̃ Q̃_3 = s. Using the formulas (4.16) we have u(y) = (G_1 + G_2 - Q_2 B)y and b(y) = -Q_3 B y. The formula for u(y) is easily obtained from (3.8) and (3.9).

Take y = D^{-1}(C + F_1 + F_2)x, x ∈ K. Then

B y = B D^{-1}(C + F_1 + F_2)x = (A - T_1 + A_03)x + (T_1 - T_2)x + (A^x - T_2 + A^x_04)x.

Since (A - T_1 + A_03)x ∈ M and (A^x - T_2 + A^x_04)x ∈ M^x, we have Q_3 B y = (T_1 - T_2)x. Next, take y ∈ B^{-1}(M + M^x). Then Q_3 B y = 0. This proves the formula for b(y).

Using (2.4) and (3.2) it easily follows that rank Q̃_2 B̃ = t and rank Q̃_3 B̃ = s. □

Note that the operator Λ in Theorem 4.1 is completely arbitrary, so we might even have set Λ = 0. However, the choice of Λ will play an important role in Chapter II, and there the choice Λ = 0 will not be suitable.


I.5. The final formulas

In this section we apply the factorization formulas of Theorem 1.1 to the dilation θ̃ constructed in Theorem 4.1. This yields the final result of the chapter, which is an improved version of Theorem I.10.1 in [2].

THEOREM 5.1. Let W(λ) = D + C(λ - A)^{-1}B be a rational matrix function which is regular at infinity, and assume that A and A^x := A - BD^{-1}C have no real eigenvalues. Then W admits a right Wiener-Hopf factorization W(λ) = W_-(λ)D(λ)W_+(λ), -∞ ≤ λ ≤ ∞, of which the factors are given by:

W_-(λ) = D + C(λ - A)^{-1}((Q_1 + Q_2)B - G_1) + C(λ - A)^{-1}V_-(λ - T_1)^{-1}Q_3 B +
         + (C Q_3 + F_1)(λ - T_1)^{-1}Q_3 B;

W_-(λ)^{-1} = D^{-1} - D^{-1}(C(Q_1 + Q_3) + F_1 Q_3)(λ - A^x)^{-1}B D^{-1} -
         - D^{-1}C Q_2(λ - S_1)^{-1}V^x_-(λ - A^x)^{-1}B D^{-1} -
         - D^{-1}C Q_2(λ - S_1)^{-1}(Q_2 B - G_1)D^{-1};

W_+(λ) = I + D^{-1}(C(Q_3 + Q_4) + F_2 Q_3)(λ - A)^{-1}B +
         + D^{-1}C Q_2(λ - S_2)^{-1}V_+(λ - A)^{-1}B + D^{-1}C Q_2(λ - S_2)^{-1}(Q_2 B - G_2);

W_+(λ)^{-1} = I - D^{-1}C(λ - A^x)^{-1}((Q_2 + Q_4)B - G_2) -
         - D^{-1}C(λ - A^x)^{-1}V^x_+(λ - T_2)^{-1}Q_3 B - D^{-1}(C Q_3 + F_2)(λ - T_2)^{-1}Q_3 B;

E^{-1}D(λ)E = diag( ((λ-i)/(λ+i))^{ω_1}, ..., ((λ-i)/(λ+i))^{ω_s}, 1, ..., 1,
                    ((λ+i)/(λ-i))^{α_1}, ..., ((λ+i)/(λ-i))^{α_t} ).

Here T_1, T_2 are the incoming operators, S_1, S_2 the outgoing operators, F_1, F_2 the feedback operators, G_1, G_2 the output injection operators and Q_1, Q_2, Q_3, Q_4 the projections which were introduced in the first paragraphs of the previous section. Furthermore,

V_- = (Q_1 + Q_2)A Q_3 + (Q_1 + Q_2)B D^{-1}F_1 + Λ - G_1 D^{-1}F_1,

V^x_- = Q_2 A^x(Q_1 + Q_3) + G_1 D^{-1}C(Q_1 + Q_3) + Λ Q_3,

V_+ = Q_2 A Q_4 - G_2 D^{-1}C Q_4 - Λ Q_3 - (Q_2 B - G_1 - G_2)D^{-1}F_1,

V^x_+ = Q_4 A^x Q_3 - Q_4 B D^{-1}F_2 - Λ - G_1 D^{-1}(C Q_3 + F_2),

where Λ is an arbitrary operator from K into M ∩ M^x. Finally,


where z_1,...,z_t and y_1,...,y_s are the vectors appearing in the sets of outgoing and incoming data (4.2) and (4.1), respectively, and z_{t+1},...,z_{m-s} form a basis for a complement Y_0 of span{z_j}_{j=1}^t in B^{-1}[M + M^x].

PROOF. The idea of the proof is essentially the same as in the proof

of Theorem I.10.1 in [2]. We apply Theorems 1.1 and 4.1. So, let Ã, B̃, C̃ and Ã^x = Ã - B̃D^{-1}C̃ be as in the previous section, and assume that the entries A_ij in Ã are defined as in Theorem 4.1. Furthermore, take X_1, X_2, X_3 and X_4 as in the paragraph preceding Lemma 4.5. Theorem 1.1 applied to the realization θ̃ = (Ã,B̃,C̃,D; X̃, C^m) yields a right Wiener-Hopf factorization W(λ) = W_-(λ)D(λ)W_+(λ), -∞ ≤ λ ≤ ∞, with

W_-(λ) = D + C̃ Q̃_1(λ - Ã|X_1)^{-1}Q̃_1 B̃,
W_+(λ) = I + D^{-1}C̃ Q̃_4(λ - Ã|X_4)^{-1}Q̃_4 B̃,

where Q̃_1 and Q̃_4 are the projections introduced in Lemma 4.6. From Lemmas 4.3 and 4.4 it follows that we may write

X_1 = { (0, P_20 m, m + x, x, 0)^T : m ∈ M, x ∈ K },
X_4 = { (P^x_10 m^x, 0, m^x + x, 0, x)^T : m^x ∈ M^x, x ∈ K }.

Define J: M ⊕ K → X_1 by

J(m ⊕ x) = (0, P_20 m, m + x, x, 0)^T.

Then J is invertible, and we have Q̃_1 B̃ = J B_-, C̃ Q̃_1 J = C_- and Ã|X_1 J = J A_-, where

A_- = [ A|M   (Q_1 + Q_2)(A + A_03)Q_3
        0     T_1 ],
B_- = [ (Q_1 + Q_2)B - G_1
        Q_3 B ],
C_- = ( C(Q_1 + Q_2)   C Q_3 + F_1 ).

It follows that W_-(λ) = D + C_-(λ - A_-)^{-1}B_-. Calculating this gives the formula for W_- with V_- = (Q_1 + Q_2)(A + A_03)Q_3. By using the definition of A_03 (see


Theorem 4.1) one gets the desired expression for V_-.

Further, W_-(λ)^{-1} = D^{-1} - D^{-1}C_-(λ - A^x_-)^{-1}B_- D^{-1}, where A^x_- = A_- - B_- D^{-1}C_-. Next, one calculates A^x_-, B_- and C_- relative to the decomposition (M ∩ M^x) ⊕ Im(Q_1 + Q_3) (= M + K). This yields

A^x_- = [ S_1   V^x_-
          0     (Q_1 + Q_3)A^x(Q_1 + Q_3) ],
B_- = [ Q_2 B - G_1
        (Q_1 + Q_3)B ],
C_- = ( C Q_2   C(Q_1 + Q_3) + F_1 Q_3 ).

From these identities the formula for W_-(λ)^{-1} is clear.

Analogously, define J^x: M^x ⊕ K → X_4 by

J^x(m^x ⊕ x) = (P^x_10 m^x, 0, m^x + x, 0, x)^T.

Then J^x is invertible, and Q̃_4 B̃ = J^x B_+, C̃ Q̃_4 J^x = C_+ and Ã^x|X_4 J^x = J^x A^x_+, where

A^x_+ = [ A^x|M^x   (Q_2 + Q_4)(A^x + A^x_04)Q_3
          0         T_2 ],
B_+ = [ (Q_2 + Q_4)B - G_2
        Q_3 B ],
C_+ = ( C(Q_2 + Q_4)   C Q_3 + F_2 ).

Since W_+(λ)^{-1} = I - D^{-1}C_+(λ - A^x_+)^{-1}B_+, we obtain the formula for W_+(λ)^{-1} with V^x_+ = (Q_2 + Q_4)(A^x + A^x_04)Q_3. Using the definition of A_04 (see Theorem 4.1) it follows that


which, because of (2.10) and (3.8), yields the desired formula for V^x_+.

Next, we compute W_+(λ) = I + D^{-1}C_+(λ - A_+)^{-1}B_+. To do this we write A_+ = A^x_+ + B_+ D^{-1}C_+, B_+ and C_+ as block matrices relative to the decomposition (M ∩ M^x) ⊕ Im(Q_3 + Q_4) (= M^x + K). We obtain:

A_+ = [ S_2   V_+
        0     (Q_3 + Q_4)A(Q_3 + Q_4) ],
B_+ = [ Q_2 B - G_2
        (Q_3 + Q_4)B ],
C_+ = ( C Q_2   C(Q_3 + Q_4) + F_2 Q_3 ).

From these identities the formula for W_+ is clear.

Following the line of arguments of the proof of Theorem I.3.1 in [2] one sees (modulo some changes involving the operator D) that

(5.1) D(λ)y = ((λ-i)/(λ+i))^{ω_j} y_j   for y = y_j, j = 1,...,s;
      D(λ)y = y                          for y ∈ Y_0;
      D(λ)y = ((λ+i)/(λ-i))^{α_j} z_j   for y = z_j, j = 1,...,t.

It follows that E^{-1}D(λ)E has the desired diagonal form. □
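The diagonal entries of E^{-1}D(λ)E are integer powers of the Möbius factor (λ-i)/(λ+i), which has modulus one for real λ; a quick numerical check (illustration only, not part of the text):

```python
import numpy as np

def diag_entry(lam, omega):
    """A diagonal entry ((lam - i)/(lam + i))**omega of E^{-1} D(lam) E."""
    return ((lam - 1j) / (lam + 1j)) ** omega

# for real lam each entry is unimodular, so the middle factor is unitary there
vals = [diag_entry(x, 3) for x in (-2.0, 0.5, 10.0)]
assert all(abs(abs(v) - 1.0) < 1e-12 for v in vals)
```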

II. SYMMETRIC WIENER-HOPF FACTORIZATION

In the first two sections of this chapter it is shown that for a selfadjoint rational matrix function the incoming and outgoing data, the incoming and outgoing operators and the corresponding feedback and output injection operators may be chosen in such a way that they are dual to each other in an appropriate sense. On the basis of this duality and Theorem I.5.1 we prove in the third section the main theorems of this paper.

II.1. Duality between incoming and outgoing operators

Throughout this chapter W(λ) is an m × m selfadjoint rational matrix function which does not have poles and zeros on the real line including infinity, and θ = (A,B,C,D; C^n, C^m) is a fixed realization of W which satisfies the following general conditions: D is selfadjoint and invertible, the operators A and A^x (:= A - BD^{-1}C) have no real eigenvalues, and for some invertible selfadjoint n × n matrix H we have

(1.1) HA = A*H,   HB = C*.

Such a realization always exists (see [5,8]); in fact, if the state space


dimension of θ (i.e., the number n) is minimal among all realizations of W, then the realization θ satisfies the above general conditions automatically.
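The conditions (1.1) are easy to exhibit numerically: any A of the form H^{-1}(S + S*) satisfies HA = A*H, and C = B*H gives HB = C*. The following sketch (toy data; the particular construction is only an assumption made for illustration) also checks that the symmetry propagates to A^x:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2

G = rng.standard_normal((n, n))
H = G + G.T                          # invertible selfadjoint (generically)
S = rng.standard_normal((n, n))
A = np.linalg.solve(H, S + S.T)      # A = H^{-1}(S + S*)  =>  HA = A*H
B = rng.standard_normal((n, m))
C = B.T @ H                          # HB = C*
D = np.diag([1.0, -1.0])             # selfadjoint and invertible

Ax = A - B @ np.linalg.inv(D) @ C
assert np.allclose(H @ A, A.T @ H)   # (1.1)
assert np.allclose(H @ Ax, Ax.T @ H) # and it propagates to A^x
```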

Note that (1.1) implies that HA^x = (A^x)*H. Let P (resp. P^x) be the spectral projection of A (resp. A^x) corresponding to the eigenvalues in the upper half plane. Then

HP = (I - P)*H,   HP^x = (I - P^x)*H.

Put M = Im P and M^x = Im(I - P^x). It follows that HM = M^⊥ and HM^x = (M^x)^⊥. As an easy consequence we have the following identity:

(1.2) [H(M ∩ M^x)]^⊥ = M + M^x.

Note that (1.2) implies that dim(M ∩ M^x) = codim(M + M^x).

Recall that the incoming and outgoing spaces for θ are the spaces

H_j = M + M^x + Im B + ... + Im A^{j-1}B,   j = 0,1,2,...,

K_j = M ∩ M^x ∩ Ker C ∩ ... ∩ Ker C A^{j-1},   j = 0,1,2,...,

respectively. Here H_0 = M + M^x, K_0 = M ∩ M^x. Formula (1.2) gives (HH_0)^⊥ = K_0. Further, a similar computation applies to each j. Hence

(1.3) (HH_j)^⊥ = K_j,   j = 0,1,2,....

According to [2], Corollary I.10.2 the incoming indices and the outgoing indices are determined by:

s = dim H_1/H_0,   t = dim K_0/K_1,

ω_j = #{k : dim H_k/H_{k-1} ≥ s + 1 - j},

α_j = #{k : dim K_{k-1}/K_k ≥ t + 1 - j}.

By combining this with (1.3) we get the following proposition.

PROPOSITION 1.1. The incoming indices ω_1,...,ω_s and the outgoing indices α_1,...,α_t coincide, that is, s = t and ω_j = α_j, j = 1,...,s.

Let K be any complement of M + M^x in C^n. Then dim K = dim(M ∩ M^x) (see (1.2)). Further

(1.4) [x,y] = <Hx,y>,   x ∈ M ∩ M^x, y ∈ K

defines a Hilbert duality between M ∩ M^x and K (because of the invertibility of H). We shall use this duality to define incoming data in terms of outgoing


data.

LEMMA 1.2. Let K be any complement of M + M^x in C^n. Let {d_{jk}}_{k=1}^{α_j}, j=1,...,s, and {e_{jk}}_{k=1}^{α_j}, j=1,...,s, be outgoing bases for A-i and A+i, respectively, and assume that the two bases are related by

(1.5) d_{jk} = Σ_{v=0}^{k-1} C(k-1,v)(-2i)^v e_{j,k-v}.

Let {f_{jk}}_{k=1}^{α_j}, j=1,...,s, be vectors in K such that

(1.6) <Hf_{jk}, e_{pq}> = δ_{jp} · δ_{k,α_p-q+1},

and put

(1.7) g_{jk} = Σ_{v=0}^{k-1} C(k-1,v)(2i)^v f_{j,k-v}.

Then {f_{jk}}_{k=1}^{α_j}, j=1,...,s, is an incoming basis for A-i, {g_{jk}}_{k=1}^{α_j}, j=1,...,s, is an incoming basis for A+i and

(1.8) <Hg_{jk}, d_{pq}> = δ_{jp} · δ_{k,α_p-q+1}.
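The relations (1.5) and (1.7) are triangular binomial substitutions, and because (-2i) + (2i) = 0 the two substitutions are mutually inverse (binomial convolution). A small numerical check (illustration only, not part of the text):

```python
import numpy as np
from math import comb

def triangular_transform(alpha, c):
    """Matrix of the substitution  d_k = sum_{v=0}^{k-1} C(k-1, v) c^v e_{k-v}
    on a chain of length alpha (indices 1..alpha), as in (1.5) and (1.7)."""
    T = np.zeros((alpha, alpha), dtype=complex)
    for k in range(1, alpha + 1):
        for v in range(k):
            T[k - 1, k - v - 1] = comb(k - 1, v) * c ** v
    return T

alpha = 5
T_minus = triangular_transform(alpha, -2j)   # e -> d as in (1.5)
T_plus = triangular_transform(alpha, 2j)     # f -> g as in (1.7)
# the two transforms are mutually inverse
assert np.allclose(T_minus @ T_plus, np.eye(alpha))
```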

PROOF. First note that because of the duality (1.4) between M ∩ M^x and K there always exists a set of vectors in K satisfying (1.6). Further, dim(M ∩ M^x) = dim K implies that the vectors f_{jk} form a basis for K. Also, for j = 1,...,s the vector f_{j1} ∈ (HK_1)^⊥ = H_1. Since dim H_1/H_0 = s, it follows that the vectors f_{11},...,f_{s1} form a basis for H_1 modulo H_0.

For x = Σ_{v=1}^s Σ_{μ=1}^{α_v-1} β_{vμ} e_{vμ} ∈ K_1 we have

<H((A-i)f_{jk} - f_{j,k+1}), x> = <Hf_{jk}, (A+i)x> - <Hf_{j,k+1}, x> =
  = Σ_{v=1}^s Σ_{μ=1}^{α_v-1} β_{vμ} { <Hf_{jk}, e_{v,μ+1}> - <Hf_{j,k+1}, e_{vμ}> }.

From (1.6) it follows that the term between braces is zero, so (A-i)f_{jk} - f_{j,k+1} ∈ (HK_1)^⊥ = H_1 = M + M^x + Im B. So {f_{jk} : k = 1,...,α_j, j = 1,...,s} is an incoming basis for A-i.

Let the vectors g_{jk} be given by (1.7). From Proposition I.2.1 we know that they form an incoming basis for A+i. It remains to prove formula (1.8). From (1.5) and (1.7) we obtain

<Hg_{jk}, d_{pq}> = Σ_{l=0}^{k-1} Σ_{v=0}^{q-1} C(k-1,l) C(q-1,v) (2i)^{l+v} (-1)^v <Hf_{j,k-l}, e_{p,q-v}>,

which is zero if p ≠ j. So, let p = j. Then, using (1.6) we have


The Kronecker delta appearing here is non-zero precisely when l + v = k - α_j + q - 1. Since l + v ≥ 0 this implies q - 1 ≥ α_j - k. Put v = k - α_j + q - 1 - l in the formula above. Then 0 ≤ v ≤ q - 1 gives 0 ≤ l ≤ k - α_j + q - 1. So

<Hg_{jk}, d_{jq}> = (2i)^{k-α_j+q-1} Σ_{l=0}^{k-α_j+q-1} (-1)^{k-α_j+q-1-l} C(k-1,l) C(q-1, k-α_j+q-1-l) = δ_{k,α_j-q+1}.

This proves formula (1.8). □

So far the complement K of M + M^x was arbitrary. We shall now prove

that K may be chosen in a special way. Take subspaces N and N^x such that M = N ⊕ (M ∩ M^x) and M^x = N^x ⊕ (M ∩ M^x).

LEMMA 1.3. There exists a complement K of M + M^x such that K is H-neutral and

(1.9) [H(N ⊕ N^x)]^⊥ = K ⊕ (M ∩ M^x).

PROOF. Put L = [H(N ⊕ N^x)]^⊥. It is easily seen that M ∩ M^x = L ∩ [H(M ∩ M^x)]^⊥ = L ∩ (M + M^x). Let x_1,...,x_k be a basis of M ∩ M^x. Choose vectors y'_1,...,y'_k in L such that

(1.10) <Hy'_j, x_i> = δ_{ij},   i,j = 1,...,k.

This is possible, since C^n equipped with the indefinite inner product given by H is a nondegenerate space. Put

(1.11) y_j = y'_j - (1/2) Σ_{v=1}^k <Hy'_j, y'_v> x_v,   j = 1,...,k,

and let K = span{y_1,...,y_k}. Using (1.10) and (1.11) it is easily seen that

<Hy_j, y_i> = 0,   i,j = 1,...,k.

So K is H-neutral. Note that also K ⊂ L. Finally, K ∩ (M ∩ M^x) = (0), since

<Hy_j, x_i> = δ_{ij},   i,j = 1,...,k,

and dim L = codim(N ⊕ N^x) = dim(M ∩ M^x) + codim(M + M^x) = 2·dim(M ∩ M^x).

So L = K ⊕ (M ∩ M^x). □

An H-neutral complement K of M + M^x satisfying (1.9) will be called a canonical complement, and in that case the decomposition

(1.12) C^n = N ⊕ (M ∩ M^x) ⊕ K ⊕ N^x

is said to be a canonical decomposition for θ.


In what follows Q_i (i = 1,2,3,4) is the projection onto the i-th space in the decomposition (1.12) along the other spaces in this decomposition.

LEMMA 1.4. If (1.12) is a canonical decomposition, then

(1.13) HQ_2 = Q_3*H,   HQ_1 = Q_4*H.

PROOF. First note that in a canonical decomposition

(1.14) (HK)^⊥ = N ⊕ K ⊕ N^x.

Using this identity together with (1.2) and (HM)^⊥ = M, (HM^x)^⊥ = M^x it is straightforward to derive (1.13). □

PROPOSITION 1.5. Let K be a canonical complement of M + M^x. Let {d_{jk}}_{k=1}^{α_j}, j=1,...,s, and {e_{jk}}_{k=1}^{α_j}, j=1,...,s, be outgoing bases for A-i and A+i, respectively, and assume that the two bases are related by (1.5). Choose in K incoming bases {f_{jk}}_{k=1}^{α_j}, j=1,...,s, and {g_{jk}}_{k=1}^{α_j}, j=1,...,s, for A-i and A+i, respectively, such that (1.6) and (1.7) are satisfied. Let S_1, S_2 be the corresponding outgoing operators and T_1, T_2 the corresponding incoming operators. Then

(1.15) H(Q_3 T_1 Q_3) = (Q_2 S_2 Q_2)*H,   H(Q_3 T_2 Q_3) = (Q_2 S_1 Q_2)*H.

PROOF. First note that Lemma 1.2 guarantees the existence of the incoming bases {f_{jk}} and {g_{jk}}. We shall only prove the first identity in (1.15); the second is proved analogously. Let f_{jk} and e_{pq} be given. Then

<H(T_1 - i)f_{jk}, e_{pq}> = <Hf_{j,k+1}, e_{pq}> = δ_{jp} · δ_{k+1,α_p-q+1}   (k < α_j).

Also

<Hf_{jk}, (S_2 + i)e_{pq}> = <Hf_{jk}, e_{p,q+1}> = δ_{jp} · δ_{k,α_p-q}   (q < α_p).

So these two expressions are equal for k < α_j and q < α_p. Suppose k = α_j; then <H(T_1 - i)f_{jk}, e_{pq}> = 0, and using (1.6) one sees that

<Hf_{jα_j}, (S_2 + i)e_{pq}> = <Hf_{jα_j}, e_{p,q+1}> = 0.

A similar argument holds in case q = α_p. So

<H(T_1 - i)x, y> = <Hx, (S_2 + i)y>

for all x ∈ K and y ∈ M ∩ M^x. But then <HT_1 Q_3 x, Q_2 y> = <HQ_3 x, S_2 Q_2 y> for all x and y in C^n, and we can use (1.13) to get the first equality in (1.15). □

II.2. The basis in C^m and duality between the feedback operators and the output injection operators

Let θ = (A,B,C,D; C^n, C^m) be a realization of W which satisfies the


general conditions stated in the first paragraph of the previous section. We proceed with an analysis of the basis in C^m.

LEMMA 2.1. Let K be any complement of M + M^x in C^n, and for k = 1,...,α_j and j = 1,...,s let the vectors d_{jk}, e_{jk}, f_{jk} and g_{jk} be as in Lemma 1.2. For j = 1,...,s let z_j be given by z_j = D^{-1}C e_{jα_j} = D^{-1}C d_{jα_j}. If a set of vectors {y_1,...,y_s} satisfies

(2.1) <Dz_k, y_j> = δ_{jk},   j,k = 1,...,s,

then

(2.2) f_{j1} - By_j = g_{j1} - By_j ∈ M + M^x,   j = 1,...,s,

and, conversely, if y_1,...,y_s satisfy (2.2), then (2.1) holds. Further, <Dz_j, z> = 0 for all z ∈ B^{-1}(M + M^x) and j = 1,...,s.

Moreover, if Y_0 is a subspace of B^{-1}(M + M^x) such that B^{-1}(M + M^x) = Y_0 ⊕ span{z_1,...,z_s}, then a set of vectors {y_1,...,y_s} can be chosen such that (2.1) holds and

(2.3) (D span{y_1,...,y_s})^⊥ = span{y_1,...,y_s} ⊕ Y_0.

PROOF. Suppose (2.1) holds. Let

x = Σ_{i=1}^s Σ_{k=1}^{α_i} β_{ik} e_{ik} ∈ M ∩ M^x.

Then

<Hx, f_{j1} - By_j> = β_{jα_j} - <B*Hx, y_j> = β_{jα_j} - Σ_{i=1}^s Σ_{k=1}^{α_i} β_{ik}<Ce_{ik}, y_j> =
  = β_{jα_j} - Σ_{i=1}^s β_{iα_i}<Ce_{iα_i}, y_j> = β_{jα_j} - Σ_{i=1}^s β_{iα_i}<Dz_i, y_j> = 0.

So f_{j1} - By_j ∈ (H(M ∩ M^x))^⊥ = M + M^x.

Conversely, suppose (2.2) holds. Then

0 = <H(f_{j1} - By_j), e_{kα_k}> = δ_{jk} - <y_j, Ce_{kα_k}> = δ_{jk} - <y_j, Dz_k>.

So (2.1) holds.

Let z_j and z ∈ B^{-1}(M + M^x) be given. Then <Dz_j, z> = <Ce_{jα_j}, z> = <He_{jα_j}, Bz>. Since Bz ∈ M + M^x and e_{jα_j} ∈ M ∩ M^x = (H(M + M^x))^⊥, this equals zero.

To prove the last part of the proposition, put L = (DY_0)^⊥ and apply to L a reasoning analogous to the one used in the proof of Lemma 1.3 (with H replaced by D and N ⊕ N^x by Y_0). Then the construction of the vectors y_1,...,y_s proceeds as in the proof of Lemma 1.3. We omit the details. □


Suppose {y_1,...,y_s} are constructed such that (2.1) and (2.3) hold. It follows from Lemma 2.1 that for any basis {z_{s+1},...,z_{m-s}} in Y_0 the matrix (<Dz_j, z_k>)_{j,k=s+1}^{m-s} has the same signature as D. Choose a basis {z_{s+1},...,z_{m-s}} in Y_0 such that this matrix has the form I_p ⊕ (-I_q), where p - q = sign D and p + q + 2s = m.
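A basis realizing the normal form I_p ⊕ (-I_q) can be produced numerically by a congruence transformation built from an eigendecomposition; the sketch below is one possible way and only an illustration, not the construction used in the text:

```python
import numpy as np

def congruence_to_signature(Dm):
    """Find T and (p, q) with T* Dm T = I_p (+) (-I_q) for an invertible
    Hermitian Dm, using an eigendecomposition."""
    w, V = np.linalg.eigh(Dm)
    order = np.argsort(-w)               # positive eigenvalues first
    w, V = w[order], V[:, order]
    T = V / np.sqrt(np.abs(w))           # scale each column by |w_j|^(-1/2)
    p = int(np.sum(w > 0))
    q = int(np.sum(w < 0))
    return T, p, q

Dm = np.array([[2.0, 1.0], [1.0, -1.0]])
T, p, q = congruence_to_signature(Dm)
assert np.allclose(T.conj().T @ Dm @ T, np.diag([1.0] * p + [-1.0] * q))
```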

The different numbers among α_1,...,α_s will be denoted by κ_1,...,κ_r, where we assume κ_1 < ... < κ_r. The number of times κ_j appears in α_1,...,α_s will be denoted by m_j. Note that s = Σ_{j=1}^r m_j.

Now we introduce vectors z_{m-s+1},...,z_m via a renumbering of the vectors y_1,...,y_s as follows:

z_{m+i-(m_1+...+m_{j+1})} = y_{i+(m_1+...+m_j)},   for i = 1,...,m_{j+1} and j = 0,...,r-1.

Let S be the matrix S = [z_1 ... z_m]. Note that S is invertible, and that from the choice of z_1,...,z_m it follows that

(2.4) S*DS = [ 0              0              J
               0       I_p ⊕ (-I_q)          0
               J*             0              0 ],

where J is the s × s matrix which carries the identity blocks I_{m_r},...,I_{m_1} on its antidiagonal and is zero elsewhere.

The matrix S will be called a congruence matrix of the realization θ.

PROPOSITION 2.2. Let C^n = N ⊕ (M ∩ M^x) ⊕ K ⊕ N^x be a canonical decomposition, and let G_1, G_2: C^m → M ∩ M^x be a pair of output injection operators (relative to the projection Q_2 and) corresponding to the following set of outgoing data

(2.5) {d_{jk}}_{k=1}^{α_j}, j=1,...,s,   {e_{jk}}_{k=1}^{α_j}, j=1,...,s,   {z_j}_{j=1}^s.

Choose in K incoming bases {f_{jk}}_{k=1}^{α_j}, j=1,...,s, and {g_{jk}}_{k=1}^{α_j}, j=1,...,s, for A-i and A+i, respectively, such that (1.6) and (1.7) are satisfied. Construct vectors y_1,...,y_s in C^m such that (2.1) and (2.3) hold. Then

(2.6) {f_{jk}}_{k=1}^{α_j}, j=1,...,s,   {g_{jk}}_{k=1}^{α_j}, j=1,...,s,   {y_j}_{j=1}^s

is a set of incoming data, and the operators F_1, F_2: K → C^m defined by


(2.7) F_1 Q_3 = -(Q_2 G_2)*H,   F_2 Q_3 = -(Q_2 G_1)*H

form a pair of feedback operators corresponding to the incoming data (2.6).

PROOF. We assume that formulas (I.3.6) up to (I.3.9) hold with p = Q_2. Since (2.2) holds true, it is clear that (2.6) is a set of incoming data. So we have to prove that the operators F_1, F_2 defined by (2.7) are feedback operators, that is, we have to check that the formulas (I.2.8) up to (I.2.10) are satisfied. Take x ∈ K and y ∈ M ∩ M^x. Then

<H(A - T_1 + BD^{-1}F_1)x, y> = <Hx, Ay> - <Hx, S_2 y> + <F_1 x, D^{-1}Cy> =
  = <Hx, (A - S_2 - G_2 D^{-1}C)y> = 0,

because (A - S_2 - G_2 D^{-1}C)y ∈ Ker Q_2 = N ⊕ K ⊕ N^x = (HK)^⊥ (cf. (1.14)). Since [H(M ∩ M^x)]^⊥ = M + M^x, it follows that (A - T_1 + BD^{-1}F_1)x ∈ M + M^x for all x ∈ K, and (I.2.8) is proved. In a similar way one proves (I.2.9). To prove (I.2.10), take y ∈ C^m. Then

<DD^{-1}(C + F_1 + F_2)f_{jk}, y> = <Hf_{jk}, (Q_2 B - G_1 - G_2)y>.

If y ∈ (D span{y_j}_{j=1}^s)^⊥ = Y_0 ⊕ span{y_j}_{j=1}^s, then (Q_2 B - G_1 - G_2)y = 0 by formula (I.3.8). This implies that D^{-1}(C + F_1 + F_2)f_{jk} = Σ_{v=1}^s β_v^{jk} y_v for certain coefficients β_v^{jk}. Next, note that (2.1) implies that for l = 1,...,s

β_l^{jk} = <D Σ_{v=1}^s β_v^{jk} y_v, z_l> = <DD^{-1}(C + F_1 + F_2)f_{jk}, z_l> =
  = <Hf_{jk}, (Q_2 B - G_1 - G_2)z_l> = <Hf_{jk}, (S_2 - S_1)d_{lα_l}>,

where the last identity follows from (I.3.9). Now, use (I.3.4) and (1.6), and one sees that

β_l^{jk} = Σ_{v=0}^{α_l-1} C(α_l, v+1)(2i)^{v+1} <Hf_{jk}, e_{l,α_l-v}> = C(α_j, k)(2i)^k δ_{lj}. □

II.3. Proof of the main theorems

In this section we prove Theorems 0.1-0.3. As before, W(λ) is a selfadjoint rational m × m matrix function which is regular on the real line including the point infinity, and θ = (A,B,C,D; C^n, C^m) is a realization of W satisfying the general conditions stated in the first paragraph of Section II.1. To construct a symmetric factorization we need a number of auxiliary operators. First choose a canonical decomposition

(3.1) C^n = N ⊕ (M ∩ M^x) ⊕ K ⊕ N^x

for θ, and let Q_1, Q_2, Q_3 and Q_4 be the corresponding projections. Next, let S_1, S_2: M ∩ M^x → M ∩ M^x be a pair of outgoing operators and let G_1, G_2: C^m → M ∩ M^x be a pair of output injection operators associated with a set

(3.2) {d_{jk}}_{k=1}^{α_j}, j=1,...,s,   {e_{jk}}_{k=1}^{α_j}, j=1,...,s,   {z_j}_{j=1}^s

of outgoing data for θ (and the projection Q_2). Further, define operators T_1, T_2: K → K and F_1, F_2: K → C^m by setting

Q_3 T_1 Q_3 = H^{-1}(Q_2 S_2 Q_2)*H,   F_1 Q_3 = -(Q_2 G_2)*H,

Q_3 T_2 Q_3 = H^{-1}(Q_2 S_1 Q_2)*H,   F_2 Q_3 = -(Q_2 G_1)*H,

where H is the invertible selfadjoint operator associated with θ. Finally, extend the vectors z_1,...,z_s to a basis z_1,...,z_m of C^m such that S = [z_1 ... z_m] is a congruence matrix for θ (as in Section II.2).

THEOREM 3.1. A selfadjoint rational m × m matrix function W(λ) which is regular on the real line and at infinity admits a symmetric Wiener-Hopf factorization W(λ) = W_+(λ̄)*Σ(λ)W_+(λ), -∞ ≤ λ ≤ ∞, of which the factors are given by:

W_+(λ) = E + E D^{-1}(C(Q_3 + Q_4) + F_2 Q_3)(λ - A)^{-1}B +
  + E D^{-1}C Q_2(λ - S_2)^{-1}V(λ - A)^{-1}B + E D^{-1}C Q_2(λ - S_2)^{-1}(Q_2 B - G_2),

W_+(λ)^{-1} = E^{-1} - D^{-1}C(λ - A^x)^{-1}((Q_2 + Q_4)B - G_2)E^{-1} -
  - D^{-1}C(λ - A^x)^{-1}V^x(λ - T_2)^{-1}Q_3 B E^{-1} -
  - D^{-1}(C Q_3 + F_2)(λ - T_2)^{-1}Q_3 B E^{-1},

Σ(λ) = [ 0              0              J_1(λ)
         0       I_p ⊕ (-I_q)          0
         J_2(λ)         0              0 ],

where J_1(λ) and J_2(λ) are the s × s matrices which carry the blocks ((λ-i)/(λ+i))^{κ_j} I_{m_j} and ((λ+i)/(λ-i))^{κ_j} I_{m_j} (j = 1,...,r), respectively, on the antidiagonal and are zero elsewhere. Here E is the inverse of the congruence matrix [z_1 ... z_m] mentioned at the end


of the first paragraph of this section, and

V = Q_2 A Q_4 - G_2 D^{-1}C Q_4 - Λ Q_3 - (Q_2 B - G_1 - G_2)D^{-1}F_1,

V^x = Q_4 A^x Q_3 - Q_4 B D^{-1}F_2 - Λ - G_1 D^{-1}(C Q_3 + F_2),

where Λ is an arbitrary operator from K into M ∩ M^x which satisfies the Lyapunov equation

(3.3) HΛx + Λ*Hx = -HQ_2 A Q_3 x + H(Q_2 B - G_1)D^{-1}(C Q_3 + F_2)x,   x ∈ K.

Further, the operators A, B, C, D, Q_2, Q_3, Q_4, S_2, T_2, G_1, G_2, F_1 and F_2 are as in the first paragraph of this section, and as usual A^x = A - BD^{-1}C. Finally, κ_1,...,κ_r are the distinct numbers in the sequence α_1,...,α_s of outgoing indices in (3.2), the number m_j equals the number of times κ_j appears in the sequence α_1,...,α_s, and the non-negative integers p and q are determined by p - q = sign D and p + q = m - 2s.

PROOF. We continue to use the notation introduced in the first paragraph of this section. From Propositions 1.5 and 2.2 it follows that T1, T2 is a pair of incoming operators and F1, F2 is a pair of feedback operators corresponding to a set of incoming data. So, according to Theorem 1.5.1, the function W admits a right Wiener-Hopf factorization W(λ) = W̃_-(λ)D(λ)W̃_+(λ), of which the plus-factor W̃_+(λ) satisfies E W̃_+(λ) = W_+(λ). We compute:

    W_+(λ̄)* = E* + B*(λ - A*)^{-1}((Q3* + Q4*)C* + (F2 Q3)*) D^{-1} E*
             + B*(λ - A*)^{-1} V* (λ - S2*)^{-1} Q2* C* D^{-1} E*
             + (B* Q2* - G2*)(λ - S2*)^{-1} Q2* C* D^{-1} E*

           = E* + C(λ - A)^{-1}((Q1 + Q2)B - G1) D^{-1} E*
             + C(λ - A)^{-1}(H^{-1} V* H)(λ - T1)^{-1} Q3 B D^{-1} E*
             + (C Q3 + F1)(λ - T1)^{-1} Q3 B D^{-1} E*,

where we have used C* = HB, HA = A*H, HQ2 = Q3*H, HQ1 = Q4*H and the definitions of T1, F1 and F2. The next step is to calculate H^{-1} V* H:

    H^{-1} V* H = Q1 A Q3 + Q1 B D^{-1} F1 + Λ + Q2 A Q3 - (Q2 B - G1) D^{-1}(C Q3 + F2)
                + G2 D^{-1}(C Q3 + F1 + F2)

                = (Q1 + Q2) A Q3 + Λ - (Q2 B - G1 - G2) D^{-1}(C Q3 + F2) + G2 D^{-1} F1 + Q1 B D^{-1} F1

                = (Q1 + Q2) A Q3 + Λ + (Q2 B - G1 - G2) D^{-1} F1 + G2 D^{-1} F1 + Q1 B D^{-1} F1

                = (Q1 + Q2) A Q3 + Λ + (Q1 + Q2) B D^{-1} F1 - G1 D^{-1} F1.

Here we used the definition of Λ and the fact that (Q2 B - G1 - G2) D^{-1}(C Q3 + F1 + F2) = 0. So H^{-1} V* H is equal to the operator Y introduced in Theorem 1.5.1, and we may conclude that W_+(λ̄)*(E*)^{-1} = W̃_-(λ). But then

    W(λ) = W_+(λ̄)* Σ(λ) W_+(λ)

with Σ(λ) = (E*)^{-1} D(λ) E^{-1}. From (2.4) and (1.5.1) it follows that Σ(λ) has the desired form. The formula for W_+(λ)^{-1} is a direct consequence of Theorem 1.5.1. □
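To see how a nontrivial middle factor Σ(λ) can carry all of the "symmetry" of W, here is a second toy check (again illustrative only, not from the paper). The 2x2 function below is Hermitian on the real line but has signature zero there; it is itself a middle factor with a single outgoing index κ = 1, so W_+ = I and p = q = 0, consistent with p + q = m - 2s for m = 2, s = 1:

```python
import numpy as np

# Illustrative 2x2 example (not from the paper): on the real line the matrix
#   W(lam) = [[0, b(lam)], [1/b(lam), 0]],   b(lam) = (lam - i)/(lam + i),
# is Hermitian (|b(lam)| = 1 there, so conj(b) = 1/b), and it is itself a
# middle factor with one index kappa = 1; hence W_plus = I and Sigma = W.
def b(lam):
    return (lam - 1j) / (lam + 1j)

def W(lam):
    return np.array([[0.0, b(lam)], [1.0 / b(lam), 0.0]])

for lam in [-2.0, 0.3, 4.0]:
    M = W(lam)
    # Hermitian on the real line ...
    assert np.allclose(M, M.conj().T)
    # ... but with signature zero: the eigenvalues are +1 and -1, reflecting
    # p = q = 0, the index block carrying all the symmetry of W.
    assert np.allclose(sorted(np.linalg.eigvalsh(M)), [-1.0, 1.0])
print("middle-factor example checked")
```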

Note that the Lyapunov equation (3.3) has

    Λ = -(1/2) Q2 A Q3 + (1/2)(Q2 B - G1) D^{-1}(C Q3 + F2)

as one of its solutions. By inserting this solution Λ in the expressions for V and V^× one obtains the formulas of Theorem 0.3. Theorems 0.1 and 0.2 are immediate corollaries of Theorem 0.3.
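That this particular Λ solves (3.3) can be checked in one line (a sketch; R below is shorthand introduced here for the operator on the right-hand side of (3.3), and the selfadjointness R* = R can be verified from the symmetry relations C* = HB, HA = A*H used earlier in the proof):

```latex
% Let R := -HQ_2AQ_3 + H(Q_2B - G_1)D^{-1}(CQ_3 + F_2), so that (3.3) reads
%   H\Lambda + \Lambda^* H = R  on K,
% and the displayed solution is \Lambda = \tfrac12 H^{-1}R.  Then, since H = H^*,
\begin{aligned}
H\Lambda &= \tfrac12 R, \qquad
\Lambda^* H = \tfrac12 R^* H^{-1} H = \tfrac12 R^* ,\\
H\Lambda + \Lambda^* H &= \tfrac12\,(R + R^*) = R
\quad\text{whenever } R^* = R .
\end{aligned}
```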

We conclude with a remark about the dilation in Theorem 1.4.1. Let θ = (A,B,C,D;ℂ^n,ℂ^m) be as in the first paragraph of this section, and as in Theorem 1.4.1. Consider the dilation θ̂ = (Â,B̂,Ĉ,D;X,ℂ^m). Assume that the operators Q1, Q2, Q3, Q4, S1, S2, T1, T2, G1, G2, F1 and F2 appearing in the definition of the operators Â, B̂ and Ĉ are as in the beginning of this section. Further, assume that the operator Λ in Theorem 1.4.1 (which we can choose freely) is a solution of equation (3.3). Then the dilation θ̂ has additional symmetry properties. In fact, on X a selfadjoint operator Ĥ can be defined by setting

    ⟨Ĥ(x_1,x_2,x_0,x_3,x_4), (y_1,y_2,y_0,y_3,y_4)⟩ = ⟨Hx_0, y_0⟩ - Σ_{j=1}^{4} ⟨Hx_j, y_{5-j}⟩,

and one can show that ĤÂ = (Â)*Ĥ and ĤB̂ = (Ĉ)*. Further, for the spaces X_1, X_2, X_3 and X_4 in Section 1.4 the following identities hold true:

    (ĤX_1)^⊥ = X_1 ⊕ X_2 ⊕ X_3,
    (ĤX_2)^⊥ = X_1 ⊕ X_2 ⊕ X_4,
    (ĤX_3)^⊥ = X_1 ⊕ X_3 ⊕ X_4,
    (ĤX_4)^⊥ = X_2 ⊕ X_3 ⊕ X_4.

Also the bases in X_2 and X_3 are related by identities of the type (1.6) and (1.8) (replacing H by Ĥ). It follows that (in the sense of [9], Section V.1) θ̂ is a realization with selfadjoint centralized singularities, which one can use to prove Theorem 3.1 directly, instead of employing Theorem 1.5.1.

REFERENCES

1. Bart, H., Gohberg, I. and Kaashoek, M.A.: Minimal factorization of matrix and operator functions. Operator Theory: Advances and Applications, Vol. 1; Birkhäuser Verlag, Basel, 1979.

2.† Bart, H., Gohberg, I. and Kaashoek, M.A.: Wiener-Hopf factorization of analytic operator functions and realization. Rapport nr. 231, Wiskundig Seminarium der Vrije Universiteit, Amsterdam, 1983.

3. Bart, H., Gohberg, I. and Kaashoek, M.A.: Wiener-Hopf factorization and realization. In: Mathematical Theory of Networks and Systems (Ed. P. Fuhrmann), Lecture Notes in Control and Information Sciences, Vol. 58, Springer Verlag, Berlin etc., 1984.

4. Clancey, K. and Gohberg, I.: Factorization of matrix functions and singular integral operators. Operator Theory: Advances and Applications, Vol. 3; Birkhäuser Verlag, Basel, 1981.

5. Fuhrmann, P.A.: On symmetric rational transfer functions. Linear Algebra and its Applications 50 (1983), 167-250.

6. Gohberg, I.C. and Krein, M.G.: Systems of integral equations on a half line with kernels depending on the difference of the arguments. Uspehi Mat. Nauk 13 (1958) no. 2 (80), 3-72 (Russian) = Amer. Math. Soc. Transl. (2) 14 (1960), 217-287.

7. Nikolaichuk, A.M. and Spitkovskii, I.M.: On the Riemann boundary-value problem with hermitian matrix. Dokl. Akad. Nauk SSSR 221 (1975) no. 6. English translation: Soviet Math. Dokl. 16 (1975) no. 2, 533-536.

8. Ran, A.C.M.: Minimal factorization of selfadjoint rational matrix functions. Integral Equations and Operator Theory 5 (1982), 850-869.

9. Ran, A.C.M.: Semidefinite invariant subspaces, stability and applications. Ph.D. thesis, Vrije Universiteit, Amsterdam, 1984.

† The results from [2] which are used in the present paper can also be found in the sixth article in this volume.

Department of Mathematics & Computer Science, Vrije Universiteit Amsterdam, The Netherlands.