
Philips J. Res. 43, 346-374, 1988   R 1189

A SURVEY OF THE SPLIT APPROACH BASED TECHNIQUES IN DIGITAL SIGNAL PROCESSING APPLICATIONS

by P. DELSARTE and Y. GENIN
Philips Research Laboratory, Brussels, Av. Van Becelaere 2, Box 8, B-1170 Brussels, Belgium

Abstract
The algebraic treatment of covariance and autocorrelation matrices is omnipresent in digital signal processing applications. In the case of stationary signals, these matrices exhibit a particular structure known in mathematics as the Toeplitz structure. Most algorithms commonly used to process such matrices are based on Szegö's classical theory of orthogonal polynomials on the unit circle, which is intimately related to the subject of positive definite Toeplitz matrices. Recently, these algorithms have been shown to be redundant from the points of view of computational complexity and memory requirements. It turns out that they can be 'split' into two simpler algorithms, and that only one of them needs to be processed. This paper gives a survey of the new techniques based on the split approach in question, with the main emphasis on their applications to some standard digital signal processing problems.

Keywords: Levinson algorithm, linear prediction, orthogonal Hessenberg matrices, Pisarenko model, Schur algorithm, Szegö's orthogonal polynomials, Toeplitz matrices.

1. Introduction

There exists a vast collection of problems in digital signal processing (DSP) that can be solved with the help of least-squares mathematical techniques 1). They include system identification, system modelling, spectral analysis, and linear prediction, among others. In most situations, this approach gives rise to structured systems of linear equations, which have to be solved repeatedly since the coefficients are time varying. It is clear that the solution methods have to be as fast and robust as possible. To reduce the algorithmic complexity, it is good practice to implement dedicated numerical procedures that take advantage of the specific data structure exhibited by the problem dealt with.

346   Philips Journal of Research   Vol. 43   Nos 3/4   1988


One of the most frequent algebraic structures met in DSP problems is the Toeplitz matrix structure 2-4). This is explained by the fact that a positive definite Toeplitz matrix can be interpreted either as the covariance matrix of a stationary stochastic process or as the autocorrelation matrix of a sampled signal record. It is therefore not surprising that a great variety of algorithms have been proposed in the DSP literature to process Toeplitz matrices or some other structured matrices related to them. Progress in this field has profited from the extensive research work that was devoted earlier, in pure and applied mathematics, to various subjects closely connected with positive definite Toeplitz matrices. The theory of orthogonal polynomials on the unit circle 5,6) and the theory of certain classes of analytic functions such as Carathéodory functions and Schur functions 7) are especially worth mentioning in that respect. It turns out that the mathematical results obtained in these domains provide a unifying framework for many algorithms currently used in DSP applications.

Quite recently, a new approach to the algebraic treatment of Toeplitz matrices and related objects has been worked out by several researchers. This approach is by no means independent of the mathematical context mentioned above; however, it does not immediately fit into the classical theories. From the application viewpoint, the most striking discovery is that several standard algorithms used in Toeplitz type DSP problems are 'computationally redundant'; they can be replaced by alternative methods (with analogous algebraic structures) that are more economical in arithmetical operations and in memory space. This paper provides a survey of the class of algorithms resulting from the new approach in question, termed the 'split approach' for some reasons explained below. The authors have endeavoured to give a unifying and self-contained treatment of the subject. The scope is restricted to the case of one-dimensional real data, although the theory admits a more general setting.

The standard methods, known as the Levinson, Schur and lattice algorithms, commonly used to solve linear prediction problems are briefly reviewed in sec. 2 for further reference. It is explained how Szegö's theory of orthogonal polynomials on the unit circle provides a suitable mathematical background for these methods, which can be viewed as different implementations of the one-step recurrence formula relating successive polynomials associated with a positive definite Toeplitz matrix. The connection is established by the simple fact that the predictor polynomials processed by the Levinson algorithm are nothing but the reciprocals of Szegö's orthogonal polynomials. The Schur-Cohn stability criterion, checking whether a given polynomial is devoid of zeros inside the unit circle, is recalled to be based



on the inverse form of the same recurrence relation. Some useful information is given concerning the subject of Carathéodory functions and Schur functions (or positive functions and bounded functions, respectively). In particular, the Schur algorithm and the Schur-Cohn criterion are shown to proceed from a function-theoretic recursive method to verify whether a sequence of numbers yields the Maclaurin series of a Schur function.

The central theme of sec. 3, which is entirely devoted to the novel 'split approach', is a remarkable two-step recurrence relation for suitably normalized versions of the symmetric parts (or of the antisymmetric parts) of the classical predictor polynomials relative to a positive definite Toeplitz matrix. As a consequence of this result, it is shown how the Levinson algorithm can be replaced by a more efficient recursive method that processes only the 'symmetric predictors' thus defined. The new procedure, called the split Levinson algorithm 8), allows one to determine the sequence of reflection coefficients (or Schur-Szegö parameters) and the classical predictor of a prescribed length at lower cost than the Levinson algorithm. Similarly, the split approach applies to the Schur and lattice methods, which are shown to admit more economical counterparts, termed the split Schur and split lattice algorithms 9). An intimate connection is brought out between the two-step recurrence relation underlying the split Levinson algorithm, on the one hand, and the Bistritz polynomial stability criterion 10,11), on the other hand. In fact, these two recurrence relations are essentially the same but they are processed in opposite directions. Thus, the Bistritz criterion can be viewed as a parsimonious reformulation of the Schur-Cohn criterion in the same sense as the split Levinson algorithm with respect to the classical Levinson algorithm.

A significant side result of the split approach is to bring out a one-to-one correspondence between families of polynomials (with real coefficients) orthogonal on the unit circle and some well-defined families of polynomials orthogonal on the real line. In fact, the split Levinson recurrence relation can be transformed, by a simple change of variable, into the classical three-term formula of orthogonal polynomial theory. It is finally pointed out that the split approach can be extended to some important function-theoretic problems 12). In particular, this leads to a new characterization criterion for Carathéodory functions in terms of the positive definiteness of associated tridiagonal matrices.

Section 4 contains miscellaneous results about further applications of the split approach. The first subject is the computation of the oscillator frequencies in Pisarenko's model 13), which is often used in DSP. It is explained how this can be reduced to the eigenvalue problem for a well-defined tridiagonal matrix with zero diagonal, hence to the singular value problem for a bidiagonal matrix of half the original size 14). Next, it is shown that the computation of the eigenvalues of an orthogonal Hessenberg matrix 15) can be performed by means of the same basic method. Another application of the same technique is the line spectral pair model 16) and a new method of formant detection 17) in speech analysis. In fact, all these questions are concerned with the problem of computing the zeros of certain well-defined symmetric or antisymmetric polynomials produced by the split Levinson algorithm (roughly speaking).

Then, it is examined how the split methods are applicable to the joint process estimation problem 18-20), which is a natural generalization of the linear prediction problem often met in DSP. It turns out that the split Levinson, split Schur and split lattice algorithms can be extended to this problem, with a similar gain in efficiency as in the linear prediction case. Next, some comments are made and some results are given about the possibility of using the split approach for system matrices more general than positive definite Toeplitz matrices. In particular, the Bistritz stability criterion can be extended to a general test method for counting the zeros of a polynomial inside the unit circle 21,22); the basic recurrence relation remains the same (except in some singular cases) but the underlying Toeplitz matrix has an arbitrary signature. In a very different application area, the split Levinson algorithm can be generalized to certain matrices that are close to the Toeplitz structure, in a precise sense. Finally, brief comments are made concerning the extension of a subset of the split methods to complex Toeplitz matrices and to block-Toeplitz matrices.

2. The classical approach

In DSP applications such as spectral estimation, autoregressive modelling, linear prediction coding and the like, the problem frequently arises of solving a system of linear equations of the form

C_n a_n = σ_n (1, 0, …, 0)^T,   (1)

where C_n is a positive definite real matrix of order n+1 having the Toeplitz structure, that is,

C_n = [ c_0      c_1      …   c_n
        c_1      c_0      …   c_{n-1}
        ⋮                 ⋱   ⋮
        c_n      c_{n-1}  …   c_0 ].   (2)


The solution vector a_n = (a_{n,0}, a_{n,1}, …, a_{n,n})^T is normalized by setting a_{n,0} = 1; this uniquely determines the number σ_n (which is necessarily positive). From C_n let us construct the sequence of nested Toeplitz submatrices C_k, with k = 0,1,…,n, given by C_k = [c_{|i-j|} : 0 ≤ i,j ≤ k] as in (2). For each of them, consider the associated problem (1), i.e.,

C_k a_k = σ_k (1, 0, …, 0)^T.   (3)

The Levinson algorithm 2,4,24) is commonly used to solve the system (1); it computes recursively the solution vector a_k = (1, a_{k,1}, …, a_{k,k})^T of (3), together with the number σ_k, for k = 0,1,…,n. To describe this method it proves convenient to represent the vector a_k by means of the polynomial

a_k(z) = Σ_{i=0}^{k} a_{k,i} z^i,   (4)

often termed predictor polynomial in the linear prediction context. Note the property a_k(0) = 1.

The Levinson algorithm is mainly based on the one-step recurrence relation

a_k(z) = a_{k-1}(z) + ρ_k z ã_{k-1}(z),   (5)

with k = 1,2,…,n, where ã_{k-1}(z) = z^{k-1} a_{k-1}(z^{-1}) denotes the reciprocal (mirror image) of a_{k-1}(z). The number ρ_k in (5), called Schur-Szegö parameter, or reflection coefficient, is determined from a_{k-1}(z) through the scalar product relation

σ_{k-1} ρ_k = − Σ_{i=0}^{k-1} c_{k-i} a_{k-1,i},   (6)

while σ_k is updated by means of the formula

σ_k = σ_{k-1} (1 − ρ_k²).   (7)

The initialization is given by a_0(z) = 1 and σ_0 = c_0. As the matrix C_n is positive definite, all numbers σ_k = det C_k / det C_{k-1} are positive. This implies that the reflection coefficients satisfy the inequality

|ρ_k| < 1 for k = 1,2,…,n.   (8)


In fact, together with c_0 > 0, this property characterizes exactly the positive definiteness of C_n. The computational complexity of the Levinson algorithm solving the Toeplitz problem (1) is a quadratic function of the order n (instead of a cubic function as in the case of a general linear system solver).
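As an illustration, the recursion (5)-(7) can be sketched in a few lines of Python; the function name and the test data below are ours, not from the paper.

```python
# A minimal sketch of the Levinson recursion (5)-(7).

def levinson(c):
    """Solve the Toeplitz system (1) for autocorrelations c[0..n].
    Returns the predictor coefficients of a_n(z), the reflection
    coefficients rho_1..rho_n, and the prediction error sigma_n."""
    n = len(c) - 1
    a = [1.0]            # a_0(z) = 1
    sigma = c[0]         # sigma_0 = c_0
    rhos = []
    for k in range(1, n + 1):
        # (6): sigma_{k-1} rho_k = - sum_{i=0}^{k-1} c_{k-i} a_{k-1,i}
        rho = -sum(c[k - i] * a[i] for i in range(k)) / sigma
        rhos.append(rho)
        # (5): a_k(z) = a_{k-1}(z) + rho_k z * reciprocal(a_{k-1})(z)
        a = [1.0] + [a[i] + rho * a[k - i] for i in range(1, k)] + [rho]
        sigma *= 1.0 - rho * rho          # (7)
    return a, rhos, sigma
```

The cost is quadratic in n, as stated in the text, and for a positive definite C_n all returned reflection coefficients satisfy (8).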

In a number of applications, the sequence of reflection coefficients ρ_1, ρ_2, …, ρ_n is more significant than the predictor polynomial a_n(z) itself. When such is the case, an appropriate substitute for the Levinson algorithm is the Schur algorithm 25), which processes the variables u and v defined by

u_{k-1,j} = Σ_{i=0}^{k-1} c_{k+j-i} a_{k-1,i},   v_{k-1,j} = Σ_{i=0}^{k-1} c_{j+i} a_{k-1,i},   (9)

for j = 0,1,…,n−k. The relations (3) and (6) yield v_{k-1,0} = σ_{k-1} and u_{k-1,0} = −σ_{k-1} ρ_k, implying that ρ_k can be determined as the ratio

ρ_k = − u_{k-1,0} / v_{k-1,0}.   (10)

Furthermore, it follows from (5) that the Schur variables (9) can be updated by means of the formulas

u_{k,j} = u_{k-1,j+1} + ρ_k v_{k-1,j+1},   v_{k,j} = v_{k-1,j} + ρ_k u_{k-1,j},   (11)

with j = 0,1,…,n−k−1, for k = 1,2,…,n−1. The initial conditions are given by u_{0,j} = c_{j+1} and v_{0,j} = c_j for j = 0,1,…,n−1. The Schur algorithm is based on (10) and (11). It has about the same complexity as the Levinson algorithm. However, it does not require any scalar product computation; this makes it suitable for parallel type implementations.
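The updates (10)-(11) translate into the following sketch (function name and data are ours); note that, unlike the Levinson recursion, no inner product over the predictor coefficients appears.

```python
# A sketch of the Schur algorithm (10)-(11): reflection coefficients
# obtained from the autocorrelations without scalar products.

def schur(c):
    n = len(c) - 1
    u = [c[j + 1] for j in range(n)]      # u_{0,j} = c_{j+1}
    v = [c[j] for j in range(n)]          # v_{0,j} = c_j
    rhos = []
    for k in range(1, n + 1):
        rhos.append(-u[0] / v[0])         # (10)
        if k == n:
            break
        rho = rhos[-1]
        # (11): u_{k,j} = u_{k-1,j+1} + rho_k v_{k-1,j+1}
        #       v_{k,j} = v_{k-1,j}   + rho_k u_{k-1,j}
        u, v = ([u[j + 1] + rho * v[j + 1] for j in range(n - k)],
                [v[j] + rho * u[j] for j in range(n - k)])
    return rhos
```

Each step shortens the two variable arrays by one entry, which is what makes the method attractive for parallel, fixed-wordlength implementations.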

In many practical applications, the entries c_i of the Toeplitz matrix C_n occur as the autocorrelation lags

c_i = Σ_{t=0}^{N-1} s(t) s(t+i)   (12)

of a given nonzero sequence of signal samples s(0), s(1), …, s(N−1), of a finite length N. In such a situation, there exists an interesting method, called the lattice algorithm 26), that allows one to determine the reflection coefficients ρ_k from the signal samples themselves (without computing the autocorrelation lags). It should be pointed out that the signal samples s(t) are assumed to vanish outside the window 0 ≤ t ≤ N−1.

The signals processed by the lattice algorithm, denoted here by f and b and usually termed forward and backward prediction errors, are defined by

f_{k-1}(t) = Σ_{i=0}^{k-1} a_{k-1,i} s(t−i),   b_{k-1}(t) = Σ_{i=0}^{k-1} a_{k-1,k-1-i} s(t−i),   (13)

for t = 0,1,…,N+k−2. It follows from (5) that they satisfy the recurrence relations

f_k(t) = f_{k-1}(t) + ρ_k b_{k-1}(t−1),   b_k(t) = b_{k-1}(t−1) + ρ_k f_{k-1}(t).   (14)

Furthermore, it is seen that the reflection coefficient ρ_k can be determined from f_{k-1} and b_{k-1} with the help of the scalar product formula

σ_{k-1} ρ_k = − Σ_{t=1}^{N+k-2} f_{k-1}(t) b_{k-1}(t−1),   (15)

which results from (3), (6) and (13). Similarly, the squared Euclidean norms of both signal sequences f_{k-1} and b_{k-1} are seen to be equal to σ_{k-1}. These relations constitute the basis of the lattice algorithm, initialized by f_0(t) = b_0(t) = s(t) for t = 0,1,…,N−1. Although the lattice algorithm is more complex than the Levinson and Schur algorithms, it is sometimes preferred to them in certain applications (e.g. speech analysis) for technical reasons that will not be explained in this paper.
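A minimal sketch of the lattice relations (13)-(15), working directly on the samples, may be written as follows (function name and toy signal are ours; the windowing convention of (12) is assumed).

```python
# A sketch of the lattice algorithm (13)-(15).

def lattice(s, n):
    at = lambda x, t: x[t] if 0 <= t < len(x) else 0.0
    f = list(s)                  # f_0(t) = s(t)
    b = list(s)                  # b_0(t) = s(t)
    rhos = []
    for k in range(1, n + 1):
        sigma = sum(x * x for x in f)        # ||f_{k-1}||^2 = sigma_{k-1}
        # (15): sigma_{k-1} rho_k = - sum_t f_{k-1}(t) b_{k-1}(t-1)
        rho = -sum(f[t] * b[t - 1] for t in range(1, len(f))) / sigma
        rhos.append(rho)
        # (14): f_k(t) = f_{k-1}(t)   + rho_k b_{k-1}(t-1)
        #       b_k(t) = b_{k-1}(t-1) + rho_k f_{k-1}(t)
        m = len(f) + 1           # f_k, b_k live on t = 0..N+k-1
        f, b = ([at(f, t) + rho * at(b, t - 1) for t in range(m)],
                [at(b, t - 1) + rho * at(f, t) for t in range(m)])
    return rhos
```

For any windowed signal the results agree with those of the Levinson and Schur algorithms applied to the lags (12).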

The Levinson algorithm is intimately related to the Schur-Cohn criterion 27,28) to check whether a polynomial is stable, i.e., devoid of zeros in the closed unit disc |z| ≤ 1. Consider a polynomial a_n(z) of degree n, satisfying a_n(0) = 1. Construct the sequence of polynomials a_r(z), with decreasing degree r, by means of the formula

a_{k-1}(z) = (1 − ρ_k²)^{-1} [a_k(z) − ρ_k ã_k(z)],   (16)

for k = n, n−1, …, 1, where ρ_k is the coefficient of z^k in a_k(z). The Schur-Cohn criterion states that this construction is actually possible and yields a sequence of numbers ρ_k satisfying |ρ_k| < 1 for all k if and only if the initial polynomial a_n(z) is stable. Let us now observe that, if a_n(z) is the predictor polynomial built from the solution of the positive definite Toeplitz system (1), then the Schur-Cohn parameters ρ_k occurring in (16) coincide with the


reflection coefficients ρ_k in the Levinson recurrence (5). Indeed, (5) and (16) are inverse forms of each other. In view of (8), the Schur-Cohn criterion implies that the predictor polynomial a_n(z) is stable.
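The descending construction (16) yields a direct stability test; the sketch below (our naming) uses the paper's convention that 'stable' means free of zeros in the closed unit disc.

```python
# A sketch of the Schur-Cohn test based on (16).

def schur_cohn_stable(a):
    """a = coefficients of a_n(z) with a[0] = 1."""
    a = list(a)
    while len(a) > 1:
        rho = a[-1]                  # coefficient of z^k in a_k(z)
        if abs(rho) >= 1.0:
            return False
        rec = a[::-1]                # reciprocal (mirror image) of a_k(z)
        # (16): a_{k-1}(z) = (1 - rho_k^2)^{-1} (a_k(z) - rho_k * rec(z))
        a = [(a[i] - rho * rec[i]) / (1.0 - rho * rho)
             for i in range(len(a) - 1)]
    return True
```

Applied to a predictor polynomial coming from a positive definite Toeplitz system, the test always succeeds, in agreement with (8).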

The recurrence relation (5) underlying the Levinson algorithm was originally discovered in the framework of Szegö's theory of orthogonal polynomials on the unit circle 5,6). To explain the correspondence between both subjects let us introduce the positive definite inner product

⟨x(z), y(z)⟩ = Σ_{i=0}^{n} Σ_{j=0}^{n} c_{|i−j|} x_i y_j,   (17)

where x(z) = Σ_{i=0}^{n} x_i z^i and y(z) = Σ_{j=0}^{n} y_j z^j are any two polynomials of degree ≤ n. Consider the normalized reciprocals

g_k(z) = σ_k^{−1/2} ã_k(z)   (18)

of the predictor polynomials a_k(z). It follows immediately from (3) that they satisfy the orthogonality relation

⟨g_k(z), g_l(z)⟩ = δ_{k,l},   (19)

for 0 ≤ k,l ≤ n, where δ is the Kronecker symbol.

Let us now give a function-theoretic interpretation of this result. It is well known that the entries c_k of the positive definite Toeplitz matrix C_n are the first n+1 trigonometric moments (or Fourier coefficients) of a suitable positive measure dω(θ), in the sense that they are expressible by

c_k = ∫_0^{2π} e^{−ikθ} dω(θ)   (20)

for k = 0,1,…,n. (For a fixed order n, there is an infinite number of choices for this measure.) As a consequence, (17) can be viewed as the inner product associated with dω(θ), that is,

⟨x(z), y(z)⟩ = ∫_0^{2π} x(e^{−iθ}) y(e^{iθ}) dω(θ).   (21)

Thus, (19) means that the polynomials g_k(z) are orthonormal on the unit circle with respect to the measure dω(θ), and the Levinson relation (5) can be interpreted as the reciprocal version of the recurrence formula satisfied by such polynomials.

With the positive measure dω(θ) let us now associate the function f(z) defined in the unit disc |z| < 1 by means of the Riesz-Herglotz integral representation

f(z) = ∫_0^{2π} (e^{iθ} + z)/(e^{iθ} − z) dω(θ).   (22)

This function is analytic and its real part is nonnegative for all points z of the unit disc. A function enjoying these two properties is referred to as a positive function or as a Carathéodory function 7). The theory of Carathéodory functions is known to have numerous applications in the mathematics of signals, circuits and systems 29). The Maclaurin expansion of f(z), defined in (22), is given by

f(z) = c_0 + 2 Σ_{k≥1} c_k z^k,   (23)

where c_k is the trigonometric moment (20) of dω(θ) and, therefore, is the (k+1)th element of the first column of C_n for k = 0,1,…,n. This property allows one to consider certain modelling problems in DSP as natural interpolation problems in the class of Carathéodory functions 30).

From any Carathéodory function f(z) let us define the function φ(z) given by the formula

φ(z) = [f(0) − f(z)] / (z [f(0) + f(z)]).   (24)

This is a Schur function (also called a bounded function), which means that φ(z) is analytic and that its modulus is less than or equal to unity for all points z of the unit disc. In fact, (24) exhibits a one-to-one correspondence between the classes of Carathéodory and Schur functions 7). Consider the sequence of functions φ_k(z) generated by the recurrence formula

φ_{k+1}(z) = z^{−1} [φ_k(z) − φ_k(0)] / [1 − φ_k(0) φ_k(z)],   (25)

with k = 1,2, etc., from a given initial function φ_1(z) = φ(z). A celebrated theorem, due to Schur, states that φ(z) is a Schur function if and only if the


values φ_k(0) satisfy the condition |φ_k(0)| ≤ 1 for all k. In this case, φ_k(z) is a Schur function for all k.

When φ(z) is a Schur function originating from a given positive definite Toeplitz matrix C_n, via (20), (22) and (24), its first parameters φ_k(0) coincide with the reflection coefficients ρ_k relative to C_n; more precisely, we have

φ_k(0) = ρ_k for k = 1,2,…,n.   (26)

In fact, the Schur algorithm, based on (10) and (11), can essentially be viewed as an implementation of the first n−1 steps of Schur's function-theoretic recurrence formula (25); see for example ref. 29.

Let us finally point out that the Schur-Cohn stability criterion can be interpreted as a consequence of Schur's theorem. To see this, it suffices to note that, in view of the maximum modulus principle, the rational function φ_1(z) = ã_n(z)/a_n(z) is a Schur function of degree n if and only if the given polynomial a_n(z) is stable. Indeed, for such an initial function φ(z), the formula (25) produces the sequence of functions

φ_k(z) = ã_{n-k+1}(z)/a_{n-k+1}(z),   (27)

with k = 1,2,…,n, where the polynomials a_r(z) result from a_n(z) through the Schur-Cohn relation (16).

3. The split approach

Any polynomial x(z) can be uniquely decomposed as the sum, x(z) = x⁺(z) + x⁻(z), of its symmetric part, x⁺(z), and of its antisymmetric part, x⁻(z), characterized by x̃⁺(z) = x⁺(z) and x̃⁻(z) = −x⁻(z). Of course, we simply have 2x⁺(z) = x(z) + x̃(z) and 2x⁻(z) = x(z) − x̃(z). Roughly speaking, the split approach to linear prediction problems is based on the fact that solving the Toeplitz system (1) does not require one to determine the full predictor polynomials a_k(z), for 1 ≤ k ≤ n, as in the Levinson algorithm. It is actually possible and sufficient to recursively compute either their symmetric parts a_k⁺(z) or their antisymmetric parts a_k⁻(z). In fact, the desired predictor a_n(z) and its reflection coefficients ρ_1, ρ_2, …, ρ_n can be obtained as byproducts of a computation scheme of that type.

Let us now examine the symmetric case in some detail. (The other case can be treated similarly.) Combining the Levinson relation (5) with its reciprocal version we obtain the remarkable identity


a_k(z) + ã_k(z) = (1 + ρ_k) [a_{k-1}(z) + z ã_{k-1}(z)].   (28)


For k = 1,2,…,n,n+1, define the symmetric polynomial

P_k(z) = p_{k,0} [a_{k-1}(z) + z ã_{k-1}(z)],   (29)

with some nonzero real number p_{k,0} = P_k(0). It follows from (28) that P_k(z) is proportional to a_k⁺(z), in case k ≤ n. It proves useful to introduce the numbers

λ_k = P_k(1)/P_{k-1}(1),   (30)

termed Jacobi parameters for a reason explained later. Setting z = 1 in (28) and (29) yields the identity

λ_{k+1} = (p_{k+1,0}/p_{k,0}) (1 + ρ_k).   (31)

Next, by eliminating ã_k(z) between P_{k+1}(z) and P_k(z), using (28) and (31), we obtain the relation

p_{k+1,0} (z − 1) a_k(z) = λ_{k+1} z P_k(z) − P_{k+1}(z).   (32)

As a result, a_k(z) and ρ_k can be retrieved from P_k(z) and P_{k+1}(z), with the help of (30), (31) and (32).

Appropriate choices for the normalizing factors p_{k,0} lead to particularly simple recursive methods for computing the symmetric polynomials P_k(z). To explain this, let us first note that the coefficient vector p_k = (p_{k,i} : 0 ≤ i ≤ k)^T of the polynomial P_k(z) = Σ_{i=0}^{k} p_{k,i} z^i satisfies the system of linear equations

C_k p_k = τ_k (1, 0, …, 0, 1)^T,   (33)

with τ_k = p_{k,0} σ_{k-1} (1 − ρ_k). This follows from (3), (28) and (29), in view of the fact that C_k is centrosymmetric. As a consequence, we can write an identity of the form


C_{k+1} (p_k + Z p_k) = (τ_k′, τ_k, 0, …, 0, τ_k, τ_k′)^T, for some real number τ_k′,   (34)

with obvious notation such as Z p_k = (0, p_{k,0}, …, p_{k,k})^T. Note that (34) remains valid in the 'final case' k = n, for a suitable choice of c_{n+1}, and has to be modified in the 'initial case' k = 1.

From the polynomials P_{k-1}(z) and P_k(z) define the real numbers τ_{k-1} and τ_k by means of (33), and set their ratio

α_k = τ_{k-1}/τ_k.   (35)

Consider then the symmetric polynomial P_{k+1}(z) given by

P_{k+1}(z) = α_k (1 + z) P_k(z) − z P_{k-1}(z).   (36)

It is immediately seen by inspection of (34) that the coefficient vector p_{k+1} of (36) obeys a linear system (33), for a well-defined number τ_{k+1}, if so do p_{k-1} and p_k. Therefore, P_{k+1}(z) has the desired form (29), with k replaced by k+1; the appropriate factor p_{k+1,0} equals α_k p_{k,0} in view of (36). As a consequence, an admissible sequence of symmetric polynomials P_k(z) with k = 0,1,…,n,n+1 can be obtained by iterating the construction process above, based on (35) and (36), for an arbitrary choice of the initial polynomials P_0(z) = a and P_1(z) = b(1+z). Here we make the choice P_0(z) = 1, P_1(z) = 1 + z. Note that α_1 can be determined by use of (35) if we set τ_0 = c_0/2. Thus, the present normalization is seen to be characterized by the fact that the product τ_{k-1} p_{k,0} is independent of k. More precisely, we have

τ_{k-1} p_{k,0} = ½ c_0 for k = 1,2,…,n+1.   (37)

The split Levinson algorithm 8,9) determines the set of polynomials P_k(z) by use of the two-step recurrence relation (36), where the parameter α_k is computed as the ratio (35) of the numbers τ_{k-1} and τ_k given by the scalar products

τ_k = Σ_{i=0}^{k} c_i p_{k,i},   (38)

in agreement with (33). The reflection coefficients can be obtained along the way through the formula

ρ_k = 1 − [α_k α_{k-1} (1 + ρ_{k-1})]^{−1},   (39)

with the convention ρ_0 = 1 and α_0 = 1. The proof of (39) is based on the identity τ_k = p_{k,0} σ_{k-1} (1 − ρ_k); details are omitted. If required, the predictor polynomial a_n(z) can be computed from P_n(z) and P_{n+1}(z), in a terminal step, by use of (32) with k = n; the parameter λ_{n+1} is available either directly from (30) or from the recurrence relation (40) below. It should be pointed out that the normalization adopted here for the split Levinson algorithm is different from that of the original version 8).

It is seen that the split Levinson algorithm is more economical in arithmetical operations and in memory space than the classical Levinson method (5), (6), (7). This is due to the symmetry property of the polynomials P_k(z), that is, p_{k,i} = p_{k,k-i}, implying that only half of the coefficients have actually to be computed in (36) and that the scalar product (38) can be written as the sum of terms (c_i + c_{k-i}) p_{k,i} with i ≤ (k−1)/2, plus the term c_r p_{k,r} when k = 2r.

It is interesting to examine how the constraint (8) on the Schur-Szegö parameters ρ_k translates in terms of the coefficients α_k involved in the split Levinson algorithm. Using (30) and (36) shows that the Jacobi parameters λ_k satisfy the recurrence relation

λ_{k+1} = 2α_k − λ_k^{−1},   (40)

for k = 1,2,…,n, with λ_1 = 2 (i.e., λ_0 = ∞). When combined with the consequence λ_{k+1} = α_k (1 + ρ_k) of (28), this produces the identity

λ_k λ_{k+1} = (1 + ρ_k)/(1 − ρ_k).   (41)

As a result, the boundedness property |ρ_k| < 1 for k = 1,2,…,n is equivalent to the positivity property

λ_k > 0 for k = 2,3,…,n+1.   (42)

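The recursions (35)-(39), in the normalization P_0 = 1, P_1 = 1 + z, τ_0 = c_0/2 of the text, can be sketched as follows (function names and test data are ours; for simplicity, all coefficients are computed rather than only the half allowed by symmetry).

```python
# A minimal sketch of the split Levinson algorithm (35)-(39).

def step(P_prev, P, alpha):
    """Coefficients of alpha*(1+z)*P(z) - z*P_prev(z), cf. (36)."""
    Q = [0.0] * (len(P) + 1)
    for i, p in enumerate(P):
        Q[i] += alpha * p
        Q[i + 1] += alpha * p
    for i, p in enumerate(P_prev):
        Q[i + 1] -= p
    return Q

def split_levinson(c):
    """Reflection coefficients rho_1..rho_n from autocorrelations c[0..n]."""
    n = len(c) - 1
    P_prev, P = [1.0], [1.0, 1.0]                # P_0(z) = 1, P_1(z) = 1 + z
    tau_prev, tau = c[0] / 2.0, c[0] + c[1]      # tau_0 and tau_1, by (38)
    alpha_prev, rho_prev = 1.0, 1.0              # conventions of (39)
    rhos = []
    for k in range(1, n + 1):
        alpha = tau_prev / tau                                       # (35)
        rho = 1.0 - 1.0 / (alpha * alpha_prev * (1.0 + rho_prev))    # (39)
        rhos.append(rho)
        P_prev, P = P, step(P_prev, P, alpha)                        # (36)
        tau_prev, alpha_prev, rho_prev = tau, alpha, rho
        if k < n:
            tau = sum(c[i] * P[i] for i in range(k + 2))             # (38)
    return rhos
```

The output agrees with that of the classical Levinson algorithm, but each pass updates a single symmetric polynomial instead of a full predictor.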


In view of (40), the criterion (42) amounts to the positive definiteness of a tridiagonal matrix, or Jacobi matrix, constructed from the numbers α_k. Let us now briefly consider the environment of this question 12,31). For k = 0,1,…,n, define the tridiagonal matrix polynomial

J_k(z) = [ α_0(1+z)   z
           1          α_1(1+z)   z
                      ⋱          ⋱   z
                                 1   α_{k-1}(1+z) ],   (43)

with α_0 = 1 (i.e., P_{-1}(z) = 0). In view of (36), the Laplace expansion rule for determinants yields the formula

P_k(z) = det J_k(z).   (44)

It follows from (30) and (44) that the positivity constraint (42) exactly expresses the fact that the symmetric Jacobi matrix J_n(1) is positive definite. Equivalently, under the assumption c_0 > 0, the Toeplitz matrix C_n and the Jacobi matrix J_n(1) are positive definite simultaneously. (Let us mention that a congruence relation between C_n and J_n(1) can be exhibited explicitly 12).) This property is used in the final part of the present section, where a novel characterization of Carathéodory functions is derived from the split approach.
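The link (43)-(44) can be illustrated numerically (the construction below is ours): the leading principal minors D_k of the Jacobi matrix at z = 1 obey the same recurrence D_k = 2 α_{k-1} D_{k-1} − D_{k-2} as the values P_k(1), so all minors are positive exactly when C_n is positive definite.

```python
# A numerical illustration of (43)-(44): minors of J(1) equal P_k(1).

def split_polys(c):
    """Polynomials P_0..P_{n+1} and parameters alpha_1..alpha_n of (35)-(36)."""
    n = len(c) - 1
    polys = [[1.0], [1.0, 1.0]]              # P_0(z) = 1, P_1(z) = 1 + z
    tau_prev, tau = c[0] / 2.0, c[0] + c[1]
    alphas = []
    for k in range(1, n + 1):
        alpha = tau_prev / tau               # (35)
        alphas.append(alpha)
        P_prev, P = polys[-2], polys[-1]
        Q = [0.0] * (k + 2)                  # (36): alpha (1+z) P_k - z P_{k-1}
        for i, p in enumerate(P):
            Q[i] += alpha * p
            Q[i + 1] += alpha * p
        for i, p in enumerate(P_prev):
            Q[i + 1] -= p
        polys.append(Q)
        tau_prev = tau
        if k < n:
            tau = sum(c[i] * Q[i] for i in range(k + 2))   # (38)
    return polys, alphas

polys, alphas = split_polys([1.0, 0.5, 0.3, 0.1])   # a positive definite C_3
minors = [1.0]                                      # D_0 = 1 = P_0(1)
for a in [1.0] + alphas:                            # alpha_0 = 1
    D1 = minors[-1]
    D2 = minors[-2] if len(minors) > 1 else 0.0
    minors.append(2.0 * a * D1 - D2)                # tridiagonal minor recurrence
```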

As one may expect, there exist split versions of the Schur algorithm (10), (11) and of the lattice algorithm (14), (15), which can be deduced from the split Levinson algorithm 9). In fact, two variants exist for each of them, depending on the choice between the symmetric parts and the antisymmetric parts of the predictor polynomials a_k(z). As before, we consider only the details of the symmetric case.

To begin with, let us combine the Schur variables (9) in agreement with (29) so as to define the appropriate 'split Schur variables'

w_{k,j} = p_{k,0} (u_{k-1,j} + v_{k-1,j}) = Σ_{i=0}^{k} c_{i+j} p_{k,i},   (45)

with 0 ≤ j ≤ n−k. Translating the split Levinson recurrence relation (36) in terms of the variables (45) produces the formula

w_{k+1,j} = α_k (w_{k,j} + w_{k,j+1}) − w_{k-1,j+1},   (46)

for j = 0,1,…,n−k−1, and k = 1,2,…,n. The split Schur algorithm 9) is based on the recurrence relation (46), with the initial conditions w_{0,j} = c_j and w_{1,j} = c_j + c_{j+1}. In this method, the coefficient α_k is determined by α_k = τ_{k-1}/τ_k, as in (35), where τ_k is now given by

τ_k = w_{k,0} for k = 1,2,…,n.   (47)

The identity (47) follows directly from (45), in view of (38). Recall that the initial value is τ_0 = c_0/2. The main output of the algorithm is the sequence of reflection coefficients ρ_k; it is obtained from the recurrence relation (39), with ρ_0 = α_0 = 1. The split Schur algorithm exhibits about the same complexity and memory reduction as the split Levinson algorithm. It is interesting to note that the former method involves no scalar product computation, in contrast with the latter.

The companion split lattice algorithm can be deduced from its classical counterpart (14), (15) by a mere duplication of the argument above in terms of the signal variables (13). It is clear that the appropriate 'split lattice variables', denoted here by x_k(t), have to be defined from the classical lattice variables as

x_k(t) = p_{k,0} (f_{k-1}(t) + b_{k-1}(t−1)) = Σ_{i=0}^{k} p_{k,i} s(t−i).   (48)
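Before turning to the lattice form, the split Schur method just described can be sketched as follows, combining (46)-(47) with (35) and (39); the function name and data are ours.

```python
# A minimal sketch of the split Schur algorithm (46)-(47).

def split_schur(c):
    n = len(c) - 1
    w_prev = [c[j] for j in range(n + 1)]        # w_{0,j} = c_j
    w = [c[j] + c[j + 1] for j in range(n)]      # w_{1,j} = c_j + c_{j+1}
    tau_prev = c[0] / 2.0                        # tau_0 = c_0/2
    alpha_prev, rho_prev = 1.0, 1.0              # conventions of (39)
    rhos = []
    for k in range(1, n + 1):
        tau = w[0]                               # (47)
        alpha = tau_prev / tau                   # (35)
        rho = 1.0 - 1.0 / (alpha * alpha_prev * (1.0 + rho_prev))   # (39)
        rhos.append(rho)
        # (46): w_{k+1,j} = alpha_k (w_{k,j} + w_{k,j+1}) - w_{k-1,j+1}
        w_prev, w = w, [alpha * (w[j] + w[j + 1]) - w_prev[j + 1]
                        for j in range(len(w) - 1)]
        tau_prev, alpha_prev, rho_prev = tau, alpha, rho
    return rhos
```

As in the classical Schur algorithm, the arrays shrink by one entry per step and no scalar product appears.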

After translation in terms of the variables (48), the split Levinson recurrence (36) assumes the form

x_{k+1}(t) = α_k (x_k(t) + x_k(t−1)) − x_{k-1}(t−1),   (49)

for t = 0,1,…,N+k, and k = 1,2,…,n. The split lattice algorithm 9,32) is based on the recurrence relation (49), with the initial conditions x_0(t) = s(t) and x_1(t) = s(t) + s(t−1). (A closely related method was first discovered by Cybenko 33).) The coefficient α_k occurring in (49) can be determined with the help of (35), at least in principle. By use of (12), (33) and (48) we obtain 2 p_{k,0} τ_k = ‖x_k‖², with

‖x_k‖² = Σ_{t=0}^{N+k-1} x_k²(t).   (50)

In view of (37), this shows that the parameter α_k is obtainable by means of the formula

α_k = c_0 / ‖x_k‖².   (51)
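The resulting split lattice steps (48)-(51) can be sketched as follows, with α_k read off from (51) and the reflection coefficients recovered through (39); the function name and the toy signal are ours, and the windowing convention of sec. 2 is assumed.

```python
# A sketch of the split lattice algorithm (48)-(51).

def split_lattice(s, n):
    c0 = sum(v * v for v in s)                   # c_0 = ||x_0||^2
    at = lambda x, t: x[t] if 0 <= t < len(x) else 0.0
    x_prev = list(s)                             # x_0(t) = s(t)
    x = [at(s, t) + at(s, t - 1) for t in range(len(s) + 1)]   # x_1
    alpha_prev, rho_prev = 1.0, 1.0              # conventions of (39)
    rhos = []
    for k in range(1, n + 1):
        alpha = c0 / sum(v * v for v in x)                       # (51)
        rho = 1.0 - 1.0 / (alpha * alpha_prev * (1.0 + rho_prev))  # (39)
        rhos.append(rho)
        # (49): x_{k+1}(t) = alpha_k (x_k(t) + x_k(t-1)) - x_{k-1}(t-1)
        x_prev, x = x, [alpha * (at(x, t) + at(x, t - 1)) - at(x_prev, t - 1)
                        for t in range(len(x) + 1)]
        alpha_prev, rho_prev = alpha, rho
    return rhos
```

A single sequence x_k is propagated instead of the pair (f_k, b_k), which accounts for the reduced multiplication count mentioned in the text.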

The resulting algorithm allows one to compute the sequence of reflection coefficients ρ_k by means of the same relation (39) as in the companion methods. When measured in terms of the number of required multiplications, the complexity of the split lattice algorithm is approximately two thirds that of the classical lattice algorithm.

It has been pointed out in sec. 2 that the Schur-Cohn polynomial stability criterion can be viewed as an implementation of the inverse form of the one-step Szegö-Levinson recurrence relation (5). It turns out that the two-step recurrence formula (36) underlying the split Levinson algorithm can be used in a similar manner so as to produce an alternative stability criterion, which is more economical than the Schur-Cohn method. This new criterion is essentially due to Bistritz 10,11,34). Let us now briefly explain the machinery. From a given polynomial a_n(z) of degree n, with a_n(0) = 1, construct both symmetric polynomials

p_{n+1}(z) = a_n(z) + z ã_n(z),   p_n(z) = a_n(z) + ã_n(z).   (52)

(We shall use the same notation as at the beginning of this section, in spite of the fact that the initialization differs from that of the split Levinson algorithm.) Consider the descending form of the recurrence relation (36), that is,

p_{k-1}(z) = z^{-1} [α_k (1 + z) p_k(z) − p_{k+1}(z)],   (53)

for k = n, n−1, ..., 1, where α_k is determined from the obvious divisibility condition in (53), yielding

α_k = p_{k+1}(0) / p_k(0).   (54)

The Bistritz stability criterion states that the given polynomial a_n(z) is stable (i.e., devoid of zeros in |z| ≤ 1) if and only if the Jacobi parameters λ_k, given directly by (30) or recursively by (40), are positive for k = 1,2,...,n.
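As an illustration, the descending recurrence can be turned into a short numerical stability test. The sketch below is ours, not the authors' code: it assumes the normalization p_{k-1}(z) = z^{-1}[α_k(1 + z)p_k(z) − p_{k+1}(z)] with α_k = p_{k+1}(0)/p_k(0) and λ_k = p_k(1)/p_{k-1}(1), and it handles the regular case only (no vanishing constant terms or values at z = 1).

```python
def bistritz_lambdas(a):
    """Jacobi parameters lambda_n, ..., lambda_1 for a polynomial
    a(z) = a[0] + a[1] z + ... with a[0] = 1 (regular case only)."""
    n = len(a) - 1
    rev = a[::-1]
    # initialization (52): p_{n+1}(z) = a(z) + z*rev_a(z), p_n(z) = a(z) + rev_a(z)
    p_next = [a[0]] + [a[i] + rev[i - 1] for i in range(1, n + 1)] + [rev[n]]
    p_cur = [a[i] + rev[i] for i in range(n + 1)]
    lams = []
    for _ in range(n):
        alpha = p_next[0] / p_cur[0]   # divisibility condition: alpha = p_{k+1}(0)/p_k(0)
        t = [alpha * c for c in p_cur] + [0.0]
        num = [t[i] + (alpha * p_cur[i - 1] if i else 0.0) - p_next[i]
               for i in range(len(p_next))]
        p_prev = num[1:-1]             # divide alpha*(1+z)*p_k - p_{k+1} by z; both ends vanish
        lams.append(sum(p_cur) / sum(p_prev))  # lambda_k = p_k(1)/p_{k-1}(1)
        p_next, p_cur = p_cur, p_prev
    return lams

def stable(a):
    # stability <=> all Jacobi parameters positive; the number of negative
    # lambdas counts the zeros of a(z) inside the unit disc (regular case)
    return all(l > 0 for l in bistritz_lambdas(a))
```

For instance, a(z) = (1 − 0.5z)(1 − 0.3z) has both zeros outside the closed unit disc and passes the test, while a(z) = 1 − 2.2z + 0.4z², which has a zero at z = 0.5, fails it with exactly one negative λ_k.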


P. Delsarte and Y. Genin

This property can be viewed as a consequence of (42). More explicitly, it can be deduced from the remarkable identities

(55)

for k = 1,2,...,n, together with λ_{n+1} = 1. The result (55) shows the exact equivalence between the Bistritz criterion (λ_k > 0) and the Schur-Cohn criterion (|ρ_k| < 1). It follows from the fact that p_k(z) is expressed in terms of the symmetric part of a_k(z) by the identity (56).

The proof of (56) is obtained by straightforward verification from (5) and (53). Details are omitted.

Let us now turn our attention to a very different subject. The split approach to Toeplitz matrices reveals an interesting one-to-one correspondence between families of (real) polynomials g_k(z) orthogonal on the unit circle and families of even/odd polynomials π_k(x) orthogonal on the interval [−1,1] of the real line 8,35). This correspondence involves the change of variable z → x formally given by

x = ½ (z^{1/2} + z^{-1/2}),   (57)

i.e., x = cos(θ/2) for z = e^{iθ}. When applied to the symmetric part of the Szegö polynomial g_k(z) in (18), this produces the desired polynomial π_k(x), roughly speaking. The precise definition is

π_k(x) = z^{-k/2} p_k(z),   (58)

with p_k(z) as in (29). In view of the symmetry property p_k(z) = p̃_k(z), it is clear that π_k(x) actually is a polynomial of degree k in the variable x. Furthermore, it enjoys the even/odd property, that is,

π_k(−x) = (−1)^k π_k(x).   (59)

(In other words, π_k(x) involves only even powers of x when k is even and odd powers of x when k is odd.)
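The correspondence can be made concrete with a few lines of code. The sketch below assumes the normalization π_k(x) = z^{-k/2} p_k(z) with no extra scaling factor; pairing the symmetric coefficients p_j = p_{k-j} and using z^{m/2} + z^{-m/2} = 2T_m(x), with T_m the Chebyshev polynomial of the first kind and x = cos(θ/2), expresses π_k(x) directly in the Chebyshev basis. The function name is ours.

```python
import numpy as np

def pi_from_p(p):
    """Chebyshev-basis coefficients of pi_k(x) = z^(-k/2) p(z) for a symmetric
    polynomial p given as the coefficient list p[0..k], with p[j] == p[k-j]."""
    k = len(p) - 1
    cheb = np.zeros(k + 1)
    for j, pj in enumerate(p):
        m = abs(2 * j - k)   # z^(j-k/2) + z^(k/2-j) -> 2*T_m(x)
        cheb[m] += pj        # j and k-j each contribute pj, producing the factor 2
    return cheb

# example: p(z) = 1 + z^2 has zeros z = exp(+-i*pi/2) on the unit circle;
# their images x = cos(theta/2) form the symmetric pair +-1/sqrt(2)
cheb = pi_from_p([1.0, 0.0, 1.0])
roots = np.polynomial.chebyshev.Chebyshev(cheb).roots()
```

Here pi_from_p([1, 0, 1]) yields 2T_2(x) = 4x² − 2, whose roots ±1/√2 are exactly the images of the zeros of 1 + z², illustrating the zero correspondence described below (58).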




It can easily be proved by use of (19) that the polynomials π_k(x) are orthogonal on the interval [−1,1] with respect to an appropriate positive measure 6,8). Rather than deriving this result, let us now mention the two-step recurrence relation satisfied by these polynomials, namely,

π_{k+1}(x) = 2x π_k(x) − α_k π_{k-1}(x).   (60)

In fact, (60) is nothing but a rewriting of the split Levinson formula (36) by use of (57) and (58). It is interesting to note that the main properties of the polynomials π_k(x) follow directly from the recurrence relation (60) and the positivity property (42) of the associated Jacobi parameters λ_k, given by

(61)

First, Favard's theorem implies that the π_k(x) constitute a family of orthogonal polynomials, for a suitable positive measure, as a consequence of α_k > 0 only. Next, it follows from (42) and (61) that the measure has to be supported on the interval [−1,1]. Finally, the zeros of π_k(x) are all distinct, belong to the open interval (−1,1), and separate those of π_{k-1}(x). This exactly means that the zeros of p_k(z) are distinct, located on the unit circle z = e^{iθ} with θ ≠ 0, and separate those of p_{k-1}(z).

Of course, the split approach can be extended to the case of an infinite Toeplitz matrix C_∞, under the assumption that all finite Toeplitz sections C_n are nonnegative definite. This provides new mathematical tools to investigate the class of Carathéodory functions (see end of sec. 2) and, in particular, to solve certain important interpolation problems in this class 12). Let us briefly sketch the main ideas behind the new theory in question. To start with, consider the sequence of rational functions

f_k(z) = p_{n+1-k}(z) / [(1 − z) p_{n-k}(z)],   (62)

with k = 0,1,...,n, defined from the symmetric polynomials p_0(z), p_1(z), ..., p_{n+1}(z) given as above. It can easily be shown that f_k(z) is a Carathéodory function of the lossless type, in the sense that the real part of f_k(e^{iθ}) vanishes almost everywhere. It follows from (36) that the functions f_k(z) satisfy the recurrence relation


(63)

with α'_k = α_{n-k} = f_k(0). Note that the Jacobi parameter λ_{n+1-k} can be defined as the limit of (1 − z) f_k(z) when z tends to 1. The fact that f_0(z) is a Carathéodory function (of lossless type) is expressed by the positivity property λ_k > 0 for all k or, equivalently, by the positive definiteness of the Jacobi matrix J_n(1) given by (43), with n = k and z = 1.

More generally, the relation (63) can be applied to arbitrary Carathéodory functions f_k(z), lossless or not, rational or not. It can be shown that, if the initial function f_0(z) is a Carathéodory function, then so are all the functions f_k(z) generated by the recurrence (63), with α'_k = f_k(0) for k = 0,1,2, etc. The method assumes a simpler form if it is expressed in terms of the functions w_k(z) defined via the identity

(64)

with w_{-1}(z) = 1 − z and w_0(z) = f(z) a given Carathéodory function. Indeed, (63) simply yields

(65)

with α'_k = w_{k-1}(0)/w_k(0), for k = 0,1,2, etc. It turns out that f(z) is a Carathéodory function if and only if the Jacobi matrix J_k(1) built from the parameters α'_k generated by this method is nonnegative definite for all k. Let us finally observe that the recurrence relation (46) of the split Schur algorithm can be viewed as an implementation of the formula (65). Details on this subject can be found in refs 12 and 31.

4. Further applications and extensions

As a side result, the split approach to Toeplitz systems yields an efficient

method to compute the Pisarenko frequencies that occur in a particular modelling technique widely used in DSP applications 8,36). It is known that a nonnegative definite Toeplitz matrix C_n can be modelled as the covariance matrix of a stationary stochastic process resulting from the parallel connection of a white noise source and of at most n sinusoidal waves 13). In the 'nondegenerate case', there are exactly n oscillators; their frequencies ω_1, ω_2, ..., ω_n are defined by the zeros z_t = exp(iω_t) of the polynomial p_n(z)




built from the eigenvector p_n of the matrix C_n corresponding to the smallest eigenvalue λ_0. The 'eigenpolynomial' p_n(z) is proportional either to a_{n-1}(z) + z ã_{n-1}(z) or to a_{n-1}(z) − z ã_{n-1}(z), where a_{n-1}(z) denotes the predictor polynomial relative to the positive definite Toeplitz matrix C_{n-1} − λ_0 I.

Let us assume we are in the first case, i.e., the symmetric case p̃_n(z) = p_n(z), so that the results above can be applied as such. For convenience, we shall only consider the subcase where n is even, which implies that the zeros of p_n(z) occur in conjugate pairs {z_t, z̄_t}, with t = 1,2,...,n/2. To each of them, there corresponds a symmetric pair of zeros {x_t, −x_t} of the image polynomial π_n(x) of p_n(z), defined as in (58), with x_t = cos(ω_t/2). As a result, the Pisarenko frequencies ω_t can be directly obtained by computing the zeros x_t of the even polynomial π_n(x). It is well known 37) that this amounts to computing the eigenvalues of a symmetric Jacobi matrix of order n, with zero diagonal, derived in a straightforward manner from the recurrence relation (60). Furthermore, as pointed out by Golub 14), such a problem can be reduced to a singular value problem for a bidiagonal matrix of order n/2.

To explain the machinery, let us first introduce normalized versions ψ_k(x) of the orthogonal polynomials π_k(x), given by

(66)

for k = 0,1,...,n. (Recall that π_0 = 1.) When expressed in terms of the polynomials (66), the recurrence relation (60) assumes the familiar symmetric form

(67)

where k = 0,1,...,n − 1 (with ψ_{-1}(x) = 0); the new coefficients β_k are defined by

(68)

The set of formulas (67) can be written as the vector identity (69),

where T is the tridiagonal matrix

P. Delsarte and Y. Genin

⎡  0     β_1                     ⎤
⎢ β_1    0     β_2               ⎥
⎢        ⋱     ⋱      ⋱          ⎥   (70)
⎢           β_{n-2}   0   β_{n-1}⎥
⎣                  β_{n-1}   0   ⎦

It follows from (70) that the zeros of π_n(x) are the eigenvalues of T, which is a very interesting observation from a numerical viewpoint.

Going a step further 14,38), let us consider the bidiagonal matrix B, of order n/2, given by

(71)

Set u_t = [ψ_0(x_t), ψ_2(x_t), ..., ψ_{n-2}(x_t)]ᵀ and v_t = [ψ_1(x_t), ψ_3(x_t), ..., ψ_{n-1}(x_t)]ᵀ, for t = 1,2,...,n/2, with x_t a zero of ψ_n(x). It is seen that (69) decomposes, for the value x = x_t, into the pair of linear systems

(72)

As a result, the positive elements x_t of the symmetric pairs of zeros {x_t, −x_t} of the polynomial π_n(x) are the singular values of the bidiagonal matrix B.
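This reduction is the classical correspondence between a zero-diagonal Jacobi matrix and a bidiagonal matrix, and it is easy to verify numerically. In the sketch below the off-diagonal entries β_k are arbitrary illustrative values (in the Pisarenko setting they would come from (68)), and the interleaved layout of B is our assumption on the structure of (71).

```python
import numpy as np

# illustrative off-diagonal entries (n = 4, hence n - 1 = 3 of them)
beta = [0.8, 0.5, 0.9]
n = len(beta) + 1

# T: symmetric tridiagonal matrix of order n with zero diagonal, as in (70)
T = np.zeros((n, n))
for i, b in enumerate(beta):
    T[i, i + 1] = T[i + 1, i] = b

# B: bidiagonal matrix of order n/2, as in (71): even-indexed betas on the
# diagonal, odd-indexed ones on the superdiagonal (Golub-Kahan correspondence)
m = n // 2
B = np.zeros((m, m))
for i in range(m):
    B[i, i] = beta[2 * i]
    if i + 1 < m:
        B[i, i + 1] = beta[2 * i + 1]

eig = np.sort(np.linalg.eigvalsh(T))
sv = np.linalg.svd(B, compute_uv=False)
# the eigenvalues of T occur in pairs +-(singular values of B)
assert np.allclose(eig, np.sort(np.concatenate([sv, -sv])))
```

The equivalence holds because a perfect-shuffle permutation of T produces the block matrix [[0, Bᵀ], [B, 0]], whose eigenvalues are plus and minus the singular values of B.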

Closely related to the subject above is the spectral decomposition problem for orthogonal Hessenberg matrices 15). Recall that a square matrix is said to be (upper) Hessenberg if all entries below the subdiagonal are zero. Any matrix is known to be reducible to Hessenberg form by elementary orthogonal transformations. Furthermore, any Hessenberg matrix H can be factorized in the form H = QR, with Q an orthogonal Hessenberg matrix and R an upper triangular matrix 24). An important class of algorithms for linear algebra problems is based on such a factorization property. Observations of that type motivate the interest in orthogonal Hessenberg matrices.
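The factorization property is easy to check numerically: since Q = H R^{-1}, and an upper Hessenberg matrix times an upper triangular matrix is again upper Hessenberg, the Q-factor of a Hessenberg matrix is itself an orthogonal Hessenberg matrix. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
H = rng.standard_normal((n, n))
H[np.tril_indices(n, -2)] = 0.0   # upper Hessenberg: zeros below the subdiagonal

Q, R = np.linalg.qr(H)
# Q = H R^{-1} is orthogonal by construction and upper Hessenberg,
# being the product of a Hessenberg and an upper triangular matrix
assert np.allclose(Q @ Q.T, np.eye(n))
assert np.allclose(Q[np.tril_indices(n, -2)], 0.0)
```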

Consider now an orthogonal Hessenberg matrix Q = [q_{i,j} : 1 ≤ i,j ≤ n]. Without loss of generality, assume q_{i,i-1} > 0 for all i. Then the nontrivial entries of Q can be written in the form 15,39)

q_{i,j} = −ρ_{i-1} μ_i μ_{i+1} ··· μ_{j-1} ρ_j   for j ≥ i,
q_{i,j} = μ_{i-1}   for j = i − 1,   (73)




where ρ_i and μ_i are real numbers subject to the conditions ρ_0 = 1, ρ_n = ±1, and

|ρ_i| < 1,   μ_i = (1 − ρ_i²)^{1/2}   for i = 1,2,...,n − 1.   (74)

Conversely, any (n+1)-tuple of numbers ρ_i enjoying the properties above yields an orthogonal Hessenberg matrix Q of order n. In the sequel, we shall only examine the case ρ_n = 1. (The other possibility, ρ_n = −1, can be treated in a similar manner.)

Let g_0(z) = 1, g_1(z), ..., g_{n-1}(z) denote the sequence of the Szegö orthonormal polynomials (18) generated from the given numbers ρ_1, ρ_2, ..., ρ_{n-1} (taken as Schur-Szegö parameters) by means of the recurrence relation (16). Note that we have σ_0 = 1 and σ_k = μ_1² ··· μ_k² for k ≥ 1, in view of (7) and (74). It can be verified, by use of (73), that the polynomials g_k(z) satisfy the identity

[g_0(z), g_1(z), ..., g_{n-1}(z)] (zI − Q) = [0, ..., 0, g_{n-1}(z) + z g̃_{n-1}(z)].   (75)

As a consequence, a simple application of Cramer's rule yields

det(zI − Q) = a_{n-1}(z) + z ã_{n-1}(z).   (76)

(Note that the right-hand side equals a_n(z), given by (5) with ρ_n = 1.) In view of (76), the eigenvalues of the orthogonal Hessenberg matrix Q are the zeros of the symmetric polynomial p_n(z), defined as in (29), corresponding to the sequence of Schur-Szegö parameters ρ_1, ρ_2, ..., ρ_{n-1}. This result is due to Ammar, Gragg and Reichel 15).
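The parametrization (73)-(74) and the characteristic polynomial result (76) can be checked numerically. The sketch below builds Q from given Schur-Szegö parameters; the Szegö recursion is taken here in the convention a_k(z) = a_{k-1}(z) + ρ_k z ã_{k-1}(z), which is our assumption about the sign conventions of (5).

```python
import numpy as np

def hessenberg_Q(rho):
    """Orthogonal Hessenberg matrix from (rho_1, ..., rho_n), rho_n = +-1,
    via the parametrization (73)-(74)."""
    n = len(rho)
    rho = [1.0] + list(rho)                    # prepend rho_0 = 1
    mu = [np.sqrt(1.0 - r * r) for r in rho]   # mu_i = (1 - rho_i^2)^(1/2)
    Q = np.zeros((n, n))
    for i in range(1, n + 1):
        if i >= 2:
            Q[i - 1, i - 2] = mu[i - 1]        # subdiagonal entry mu_{i-1}
        for j in range(i, n + 1):
            Q[i - 1, j - 1] = -rho[i - 1] * np.prod(mu[i:j]) * rho[j]
    return Q

rho = (0.5, -0.3, 1.0)
Q = hessenberg_Q(rho)
assert np.allclose(Q @ Q.T, np.eye(3))         # Q is orthogonal

# Szego recursion (assumed convention): a_k = a_{k-1} + rho_k * z * reversed(a_{k-1})
a = [1.0]
for r in rho[:-1]:
    a = [x + r * y for x, y in zip(a + [0.0], [0.0] + a[::-1])]
p = [x + y for x, y in zip(a + [0.0], [0.0] + a[::-1])]  # a_{n-1}(z) + z*rev(a_{n-1})(z)
assert np.allclose(np.poly(Q), p[::-1])        # det(zI - Q) matches (76)
```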

As a conclusion, the technique explained above, in the framework of the Pisarenko problem, can be applied to determine the eigenvalues z_t = exp(iω_t) of Q. These numbers are immediately available from the singular values x_t = cos(ω_t/2) of the bidiagonal matrix B displayed in (71). The entries β_k of B are given in terms of the data ρ_i by the simple formula

(77)

resulting from (68) and (39). Without going into detail, let us point out that the eigenvectors of Q are obtainable explicitly from the singular vectors u_t and v_t of B (see (72)).


In the same mathematical context, let us mention an interesting speech processing application based on the properties of the zeros of the symmetric or antisymmetric parts p_k(z) of the predictor polynomials a_k(z) relative to a certain Toeplitz matrix C_n. In the application alluded to, the zeros of such polynomials p_k(z) are known to be related to the formants of a given speech time frame; this property is used in the line spectral pair speech synthesizer technique 16,40). The method has recently been refined through an analysis of the convergence properties of the zeros of p_k(z) for increasing values of k. The formant tracking technique thus obtained has the remarkable feature of being applicable even to unvoiced sounds, although the physical formants are loosely defined in that case. Moreover, this technique ensures a smooth transition between the regions of voiced sounds and unvoiced sounds; in particular, it does not introduce any audible parasitic effect in speech synthesis. These results are due to Willems 17).

As a second general theme, let us consider how the split approach can be

extended from the linear prediction problem to the joint process estimation problem for two jointly stationary signals 18-20). Recall that the latter problem occurs in many important DSP applications, such as echo cancellation, noise cancellation, and channel equalization 41). From an algebraic viewpoint, it amounts to a system of linear equations of the form

C_n b_n = d_n,   (78)

with C_n a given positive definite Toeplitz matrix and d_n = (d_0, d_1, ..., d_n)ᵀ a given vector. Note that the special case d_1 = ... = d_n = 0 yields the linear prediction problem (1). The solution vector b_n of (78) can be computed by means of the following extended version of the Levinson algorithm 24).

As in (3), consider the set of auxiliary linear systems

C_k b_k = d_k,   (79)

for k = 0,1,...,n, with d_k = (d_0, ..., d_k)ᵀ and b_k = (b_{k,0}, ..., b_{k,k})ᵀ. Examining two successive values of k, we deduce, by subtraction, the vector identity

C_k (b_{k,0} − b_{k-1,0}, ..., b_{k,k-1} − b_{k-1,k-1}, b_{k,k})ᵀ = (0, ..., 0, d_k − γ_k)ᵀ,   (80)



where the number γ_k can be computed from b_{k-1} by means of the scalar product

γ_k = Σ_{i=0}^{k-1} c_{k-i} b_{k-1,i}.   (81)

Setting the polynomial b_k(z) = Σ_{i=0}^{k} b_{k,i} z^i and comparing (80) to (3), we obtain the recurrence formula

b_k(z) = b_{k-1}(z) + ν_k ã_k(z),   (82)

where ν_k is given by

ν_k = σ_k^{-1} (d_k − γ_k).   (83)
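Interleaving the updates (81)-(83) with the classical Levinson recursion (5)-(7) gives a complete O(n²) solver for (78). The sketch below is our own transcription, not the authors' code; the reflection coefficient convention a_k(z) = a_{k-1}(z) − ρ_k z ã_{k-1}(z) is an assumption.

```python
import numpy as np

def extended_levinson(c, d):
    """Solve the Toeplitz system C b = d, with C[i, j] = c[|i - j|] positive definite."""
    n = len(c) - 1
    a = [1.0]              # predictor polynomial a_k, coefficients low to high
    sigma = c[0]           # prediction error sigma_k
    b = [d[0] / c[0]]      # current solution vector b_k
    for k in range(1, n + 1):
        # classical Levinson update of the predictor
        delta = sum(c[k - i] * a[i] for i in range(k))
        rho = delta / sigma
        a = [x - rho * y for x, y in zip(a + [0.0], [0.0] + a[::-1])]
        sigma *= 1.0 - rho * rho
        # extended part: gamma_k as in (81), nu_k as in (83), update (82)
        gamma = sum(c[k - i] * b[i] for i in range(k))
        nu = (d[k] - gamma) / sigma
        b = [x + nu * y for x, y in zip(b + [0.0], a[::-1])]
    return np.array(b)

c = [4.0, 1.0, 0.5, 0.2]
d = [1.0, 2.0, 3.0, 4.0]
C = np.array([[c[abs(i - j)] for j in range(4)] for i in range(4)])
assert np.allclose(C @ extended_levinson(c, d), d)
```

The key step is that C_k ã_k = σ_k e_{last} by persymmetry of C_k, so adding ν_k ã_k to the padded previous solution fixes the last equation without disturbing the others.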

Thus, the desired polynomial b_n(z) can be computed by means of the recurrence relations (5) and (82) together with the scalar products (6) and (81). Let us point out that, as in the linear prediction problem (sec. 2), there exist extended Schur and lattice algorithms to solve the joint process estimation problem; both of them can be deduced from the extended Levinson algorithm above.

Let us now examine a more economical split counterpart of this algorithm. To that end, we first introduce the symmetric polynomial

q_k(z) = b_k(z) + b̃_k(z),   (84)

for k = 0,1,...,n. It is easily verified, from (82) and (84), that the solution polynomial b_n(z) of the problem (78) can be retrieved from q_{n-1}(z), q_n(z) and p_{n+1}(z) through the formula

(1 − z) b_n(z) = q_n(z) − z q_{n-1}(z) − μ_n p_{n+1}(z),   (85)

with μ_n = ν_n p_{n+1,0}^{-1}. It is clear from (85) that the coefficient μ_n is obtainable as the ratio

μ_n = [q_n(1) − q_{n-1}(1)] / p_{n+1}(1).   (86)


ξ_k = Σ_{i=0}^{k-1} c_{i+1} q_{k-1,i}.   (88)


It remains to provide a recursive method to compute the sequence of symmetric polynomials q_k(z) = Σ_{i=0}^{k} q_{k,i} z^i. As in (34), let us consider the matrix identity

C_{k+1} [q_{k+1}, q_k, z q_k, z q_{k-1}] =

⎡ s_{0,k+1}   s_{0,k}     ξ_{k+1}     ξ_k       ⎤
⎢ s_{1,k}     s_{1,k-1}   s_{0,k}     s_{0,k-1} ⎥
⎢    ⋮           ⋮           ⋮           ⋮      ⎥   (87)
⎢ s_{k,1}     s_{k,0}     s_{k-1,1}   s_{k-1,0} ⎥
⎣ s_{k+1,0}   ξ_{k+1}     s_{k,0}     ξ_k       ⎦

with s_{i,j} = d_i + d_j. This results from (79) and (84). The entry ξ_k in (87) is of course given by (88).

In view of the property s_{i,j} + s_{u,v} = s_{i,v} + s_{u,j}, combining (87) and (33) immediately yields the recurrence formula

where the coefficient η_k is determined by

(90)

It is clear that supplementing the formulas (35), (36), (38) of the split Levinson algorithm by the new formulas (88), (89), (90) yields an effective solution algorithm for the joint process estimation problem (78). When compared to the extended Levinson algorithm, the new 'split method' just described exhibits an economy in the number of multiplications by a factor 2 (roughly speaking). Similar techniques as in sec. 3 lead to Schur-type and lattice-type versions of this method 20); they will not be treated here. Let us finally point out that q_n(z)/2 can be interpreted as the minimum mean square error filter of length n + 1 in the class of linear phase filters 42).

The split approach is not necessarily restricted to the positive definite case.

An interesting example of the applicability of this approach to arbitrary symmetric Toeplitz matrices is provided (at least implicitly) by the Bistritz test 21,22) to count the zeros of any polynomial a_n(z) in the unit disc |z| < 1.




The method in question uses the descending form (53) of the recurrence relation (36), with the initialization (52), exactly as in the Bistritz stability criterion. In the 'regular case', it produces a complete sequence of polynomials p_k(z), for k = n + 1, n, ..., 1, 0. It can be shown that the number of zeros of a_n(z) in the unit disc (counted with their multiplicities) equals the number of negative elements among the Jacobi parameters λ_k = p_k(1)/p_{k-1}(1) with k = 1,2,...,n. In the singular case where this simple method breaks down (e.g. when p_k(0) = 0 for some k), there exist appropriate modifications of the basic procedure above. Thus, the Bistritz test can be made completely general. A detailed discussion is beyond the scope of this survey.

Certain techniques relying on the split approach can even be extended to

some matrices that do not exhibit the Toeplitz structure. Such is actually the case for the class of positive definite matrices T_n = [t_{i,j} : 0 ≤ i,j ≤ n] having displacement signature diag(1, −1) and Toeplitz distance one (or zero) 23,43).

The first property amounts to the fact that T_n is expressible in terms of a suitable positive definite Toeplitz matrix C_n in the form

(91)

where U_n is an upper triangular Toeplitz matrix with unit diagonal 44). The second property implies that the entries of C_n and U_n are related by

(92)

where (1, u_1, ..., u_n) denotes the first row of U_n and w is an appropriate real number. Here, U_k stands for the top submatrix of order k + 1 of U_n. As a consequence of (92), we obtain the relation


(93)

with c_k = (c_1, c_2, ..., c_k)ᵀ.

From the solution vector a_k of the Toeplitz system (3) and from its mirror image ã_k, let us construct the two vectors

(94)

Set the polynomials a'_k(z) = (1, ..., z^k) a'_k and b'_k(z) = (1, ..., z^k) b'_k. When expressed in terms of these new polynomials, the Levinson relations (5) and (6) assume the form


(95)

(1 + w c_0) σ_{k-1} ρ_k = − Σ_{i=0}^{k-1} t_{0,i+1} b'_{k-1,i}.   (96)

Henceforth, we make the assumption w c_0 ≠ −1. (If this is not satisfied, there exist appropriate substitutes for the specific procedure described here.) The initial conditions are a'_0(z) = 1 + w c_0 and b'_0(z) = 1. Combined with (7), the formulas (95) and (96) allow one to recursively compute the polynomials a'_k(z) and b'_k(z). Note that these are generally not reciprocal of each other, so that both of them actually have to be processed by the algorithm. It is easily seen that b'_k is proportional to the last column of T_k^{-1}. Therefore, the method above can be used to perform the Cholesky factorization of T_n.

A reduced-complexity procedure 23,43) for the same problem can be derived from a simple translation of the split Levinson algorithm. It involves a sequence of polynomials p'_k(z) of the form

(97)

Note that p'_k(z) is generally not symmetric. Elementary algebraic manipulations allow one to show that, for a suitable choice of the coefficients p'_{k,0}, the polynomials (97) satisfy a recurrence relation formally identical to (36), i.e.,

(98)

Simple initial conditions are p'_0(z) = 1 + w c_0/2 and p'_1(z) = 1 + w c_0 + z. The parameter α'_k can be determined with the help of the formulas

(99)

The desired polynomial b'_n(z) is computable in a final stage through an identity quite similar to (32), namely,

(100)

with λ'_{n+1} = p'_{n+1}(1)/p'_n(1). Although the polynomials p'_k(z) enjoy no symmetry property, a complexity reduction is obtained from the fact that the



split method, based on (98), processes a single polynomial sequence instead of two sequences as in the initial method, based on (95).

It remains an open question to know whether the split approach can be

extended to larger classes of matrices. However, as the method applies successfully to block-Toeplitz matrices 45,46), on the one hand, and as 'arbitrary matrices' are nicely embeddable into block-Toeplitz matrices 47,48), on the other hand, it seems rather plausible that the answer will be affirmative.

For the sake of simplicity, all techniques presented in this paper have been

restricted to the case of real scalar data. However, most of them can be extended to the complex case; see especially refs 12, 19, 22, 43, 49 and 50. The only noticeable difference concerns the connection between polynomials orthogonal on the unit circle and polynomials orthogonal on the real line; this property belongs to the real case exclusively. Furthermore, part of the theory can be extended to positive definite Hermitian block-Toeplitz matrices 45,46). The main exceptions concern the stability problem for matrix polynomials (which is not surprising, since it is unknown whether the Schur-Cohn method itself admits a generalization to the matrix case) and the Pisarenko problem and related eigenvalue-type problems.

REFERENCES
1) T. Kailath, Linear Least-Squares Estimation, Dowden, Hutchinson and Ross, Stroudsburg, PA, 1977.
2) T. Kailath, IEEE Trans. Inform. Theory, IT-20, 145 (1974).
3) J. Makhoul, IEEE Proc., 63, 561 (1975).
4) S.M. Kay and S.L. Marple, Jr., IEEE Proc., 69, 1380 (1981).
5) U. Grenander and G. Szegö, Toeplitz Forms and their Applications, University of California Press, Berkeley, CA, 1958.
6) G. Szegö, Orthogonal Polynomials, American Mathematical Society, New York, 1959.
7) N.I. Akhiezer, The Classical Moment Problem, Oliver and Boyd, London, 1956.
8) P. Delsarte and Y. Genin, IEEE Trans. Acoust. Speech Signal Process., ASSP-34, 470 (1986).
9) P. Delsarte and Y. Genin, IEEE Trans. Acoust. Speech Signal Process., ASSP-35, 645 (1987).
10) Y. Bistritz, Proc. Intern. Symp. Mathematical Theory of Networks and Systems, Beer-Sheva, Israel, 1983, p. 69.
11) Y. Bistritz, IEEE Trans. Circuits and Systems, CAS-30, 917 (1983).
12) P. Delsarte and Y. Genin, SIAM J. Math. Anal., 19, 718 (1988).
13) V.F. Pisarenko, Geophys. J.R. Astr. Soc., 33, 347 (1973).
14) G.H. Golub, Apl. Mat., 13, 44 (1968).
15) G.S. Ammar, W.B. Gragg and L. Reichel, Proc. 25th IEEE Conf. Decision and Control, Athens, 1986.
16) N. Sugamura and F. Itakura, Speech Communication, 5, 199 (1986).
17) L.F. Willems, Proc. Europ. Conf. on Speech Technology, Edinburgh, 1987, p. 250.
18) J.R. Bellegarda and D.C. Farden, IEEE Trans. Circuits and Systems, CAS-34, 712 (1987).
19) H. Krishna and S.D. Morgera, IEEE Trans. Acoust. Speech Signal Process., ASSP-35, 839 (1987).
20) P. Delsarte and Y. Genin, IEEE Trans. Inform. Theory, to appear.
21) Y. Bistritz, IEEE Proc., 72, 1131 (1984).
22) P. Delsarte, Y. Genin and Y. Kamp, Philips J. Res., 39, 226 (1984).




23) Y. Bistritz, H. Lev-Ari and T. Kailath, Proc. IEEE Intern. Conf. Acoustics Speech Signal Process., Tokyo, 1986, p. 253.
24) G.H. Golub and C.F. Van Loan, Matrix Computations, North Oxford Academic, Oxford, 1983.
25) J. Le Roux and C. Gueguen, IEEE Trans. Acoust. Speech Signal Process., ASSP-25, 257 (1977).
26) F. Itakura and S. Saito, Proc. 7th Intern. Congr. Acoustics, Budapest, 1971, p. 261.
27) M. Marden, Geometry of Polynomials, American Mathematical Society, Providence, RI, 1966.
28) P. Henrici, Applied and Computational Complex Analysis, Wiley, New York, 1974.
29) Y. Genin, Proc. Europ. Conf. Circuit Theory and Design, Paris, 1987, p. 195.
30) P. Delsarte, Y. Genin, Y. Kamp and P. Van Dooren, Philips J. Res., 37, 277 (1982).
31) P. Delsarte and Y. Genin, Proc. IEEE Intern. Symp. Circuits and Systems, Philadelphia, PA, 1987, p. 140.
32) Y. Bistritz, H. Lev-Ari and T. Kailath, Proc. IEEE Intern. Conf. Acoustics Speech Signal Process., Dallas, TX, 1987, p. 21.
33) G. Cybenko, SIAM J. Sci. Stat. Comput., 5, 317 (1984).
34) Y. Bistritz, IEEE Trans. Circuits and Systems, CAS-32, 1162 (1985).
35) Y. Genin, Proc. 1st Intern. Conf. Industrial and Applied Mathematics, Paris, 1987.
36) G. Cybenko, Proc. Conf. Information Systems and Sciences, Princeton, NJ, 1984.
37) B.N. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, NJ, 1980.
38) G.H. Golub and W. Kahan, SIAM J. Numer. Anal. Ser. B, 2, 205 (1965).
39) H. Kimura, IEEE Trans. Circuits and Systems, CAS-31, 1130 (1985).
40) F. Itakura and N. Sugamura, Proc. Speech Study Group of the Acoustical Society of Japan, S79-46, 1979.
41) M.L. Honig and D.G. Messerschmitt, Adaptive Filters, Structures, Algorithms, and Applications, Kluwer Academic Publishers, Boston, MA, 1984.
42) D. Manolakis, G. Carayannis and N. Kalouptsidis, IEEE Trans. Circuits and Systems, CAS-31, 974 (1984).
43) S.D. Morgera and H. Krishna, Proc. IEEE Intern. Symp. Circuits and Systems, Philadelphia, PA, 1987, p. 84.
44) P. Delsarte, Y. Genin and Y. Kamp, Proc. Intern. Symp. Mathematical Theory of Networks and Systems, Santa Monica, CA, 1981, p. 40.
45) P. Delsarte and Y. Genin, IEEE Trans. Circuits and Systems, CAS-35, 190 (1988).
46) P. Delsarte and Y. Genin, Proc. Intern. Symp. Mathematical Theory of Networks and Systems, Phoenix, AZ, 1987.
47) P. Delsarte, Y. Genin and Y. Kamp, Linear Algebra Appl., 51, 97 (1983).
48) P. Delsarte, Y. Genin and Y. Kamp, IEEE Trans. Acoust. Speech Signal Process., ASSP-33, 393 (1985).
49) Y. Bistritz, Systems Control Lett., 7, 89 (1986).
50) B. Krishna, S.D. Morgera and H. Krishna, Proc. IEEE Intern. Conf. Acoustics Speech Signal Process., Dallas, TX, 1987, p. 1839.

Authors
P. Delsarte: Ir. degree (Electrical Engineering and Applied Mathematics), University of Louvain, Belgium, 1965 and 1966; Dr. degree (Applied Sciences), University of Louvain, 1973; Philips Research Laboratory Brussels, 1966- . His research interests include algebraic coding theory, combinatorial mathematics, and the theory and applications of orthogonal polynomials and Toeplitz matrices.

Y. Genin: Ir. degree (Electrical Engineering), University of Louvain, Belgium, 1962; Dr. degree (Applied Sciences), University of Liège, Belgium, 1969; Philips Research Laboratory Brussels, 1963- ; Consulting Professor at Stanford University (1979-1980); Visiting Professor at Facultés Universitaires de Namur (1974-1976 and 1984-1985). He is currently heading the Applied Mathematics Group at PRLB. His principal research interests concentrate on the mathematical aspects of signal processing, network theory and system theory. He is a Fellow of the IEEE and is currently serving as an associate editor for the journals IEEE Transactions on Circuits and Systems, SIAM Journal on Matrix Analysis and Applications, Mathematics of Control, Signals and Systems, and Philips Journal of Research.