

Automatica 43 (2007) 1495–1507. www.elsevier.com/locate/automatica

Minimum phase properties of finite-interval stochastic realization

Hideyuki Tanaka a,*, Tohru Katayama b

a Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
b Faculty of Culture and Information Science, Doshisha University, Kyo-Tanabe, Kyoto 610-0394, Japan

Received 3 August 2005; received in revised form 30 April 2006; accepted 6 February 2007. Available online 9 July 2007.

Abstract

A finite-interval stochastically balanced realization is analyzed based on the idealized assumption that an exact finite covariance sequence is available. It is proved that a finite-interval balanced realization algorithm [Maciejowski, J. M. (1996). Parameter estimation of multivariable systems using balanced realizations. In S. Bittanti, & G. Picci (Eds.), Identification, adaptation, learning (pp. 70–119). Berlin: Springer] provides stable minimum phase models if the size of the interval is at least two times larger than the order of a minimal realization. New algorithms for finite-interval stochastic realization and stochastic subspace identification are moreover derived by means of block LQ decomposition, and the stability and minimum phase properties of models obtained by these algorithms are considered. Numerical simulation results are also included.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Stochastic realization; Finite interval; Subspace identification; Canonical correlation analysis; Innovation representation; Hilbert space; LQ decomposition

1. Introduction

The stochastic realization problem is to find Markov models whose output covariance matrices match a given covariance sequence (Faurre, 1976; Faurre, Clerget, & Germain, 1979). In the stochastic realization algorithm (Faurre, 1976), a factorization of the block Hankel matrix formed by an infinite covariance sequence is calculated, and a Riccati equation is then solved to find an innovation representation. A novel realization method was developed by Akaike (1974, 1975) based on the canonical correlation analysis (CCA), and a stochastic balanced realization algorithm was then derived based on the CCA by Desai and Pal (1984) and Desai, Pal, and Kirkpatrick (1985). Recently, the stochastic realization problem has been revisited, and a new stochastic realization algorithm has been proposed in a Hilbert space (Tanaka & Katayama, 2006).

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Brett Ninness under the direction of Editor Torsten Söderström.

* Corresponding author. Tel.: +81 75 753 4754; fax: +81 753 5507.
E-mail addresses: [email protected] (H. Tanaka), [email protected] (T. Katayama).

0005-1098/$ - see front matter © 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.automatica.2007.02.004

It is well known (Gevers, 2006; Katayama, 2005) that stochastic subspace identification algorithms (Van Overschee & De Moor, 1993, 1996) have been developed on the basis of stochastic realization theory, where the stochastic subspace identification algorithms estimate state space models from finite strings of time-series data. It has been pointed out, however, that state space identification algorithms (Aoki, 1990; Van Overschee & De Moor, 1993) may fail to solve a Riccati equation; this failure is related to a non-trivial mathematical problem of positive realness in stochastic realization theory (Lindquist & Picci, 1996a).

A finite-interval stochastic realization in a Hilbert space has been derived by Lindquist and Picci (1996a, 1996b) based on the non-steady state Kalman filter, in order to analyze state space identification algorithms (Aoki, 1990; Van Overschee & De Moor, 1993) in the light of the geometric theory of stochastic realization. They have discussed in detail the state space modeling of time series under three different assumptions: that (i) an exact infinite covariance sequence is available, (ii) an exact finite covariance sequence is available, and (iii) a finite string of time-series data is available. In particular, under assumption (ii), they have studied the finite-interval realization in a Hilbert space from the viewpoint of positive realness.


Based on the assumption (ii), a finite-interval stochastically balanced realization (an approximate stochastic realization) algorithm has been developed by Maciejowski (1996), adapting the stochastic balanced realization (Desai et al., 1985) to a finite covariance sequence. The algorithm computes the singular value decomposition (SVD) of a finite block Hankel matrix to find an approximate innovation representation without solving Riccati equations. It appears always to give minimum phase models, though this fact has not been proved (Maciejowski, 1996).

In this paper, by extending the earlier results of Tanaka and Katayama (2003, 2004), we consider a finite-interval realization based on the assumption (ii). More specifically, we assume that a positive real covariance sequence {Λ_0, Λ_1, …, Λ_{2τ−1}} is available, and that the covariance matrix has a decomposition Λ_k = HF^{k−1}G, where (F, G, H) is a minimal realization with F ∈ R^{n×n} stable. We further divide the case (ii) into two cases: (ii)(a) τ > n and (ii)(b) τ ≤ n. By using the non-steady state Kalman filter (Lindquist & Picci, 1996a), we show that the finite-interval balanced realization (Maciejowski, 1996) is stable and of minimum phase under the assumption (ii)(a). Under the assumption (ii)(b), however, we see that the finite-interval realization does not always provide stable minimum phase models, since we cannot find an exact realization (F, G, H) from the decomposition of Λ_k effective in (ii)(a).

Based on the analysis of stability and minimum phase, we moreover present realization and identification algorithms. In fact, under the assumption (ii)(a), we derive a finite-interval realization algorithm from the results due to Lindquist and Picci (1996a) by using block LQ decomposition, where the stability and minimum phase properties of the finite-interval realization are guaranteed.¹ Under the assumption (iii), we further develop a stochastic subspace identification algorithm by adapting the finite-interval realization to a finite string of time-series data; in this case, however, stability and minimum phase properties are not guaranteed.

The objective of this paper is to give an extensive analysis of the minimum phase properties of realizations obtained under the different assumptions. For finite-interval stochastic realization and stochastic subspace identification, the literature has instead focused on the problems associated with positive realness (e.g. Dahlén, Lindquist, & Mari, 1998; Lindquist & Picci, 1996a).

The rest of the paper is organized as follows. Section 2 reviews the stochastic realization theory. Section 3 proves that the finite-interval balanced realization algorithm provides stable minimum phase models, and Section 4 presents a new finite-interval realization algorithm. Section 5 discusses the extension to a subspace identification problem. Section 6 shows numerical simulations, and Section 7 concludes the paper. The Appendix includes the proofs of lemmas and theorems.

¹ We will not consider the positive extension of the covariance sequence under the assumption (ii)(b), in order to focus on deriving a subspace identification algorithm (see Lindquist & Picci, 1996a).

2. Preliminaries

2.1. Stationary process

Consider a second-order stationary process {y_t, t = 0, ±1, …}, where y_t is a p-dimensional non-deterministic process with mean zero and covariance matrices

Λ_k = E{y_{t+k} y_t^T}, k = 0, ±1, … .  (1)

We assume that Λ_k (k = 0, ±1, …) is a positive real sequence (Katayama, 2005; Lindquist & Picci, 1996a) and satisfies Σ_{t=−∞}^{∞} ‖Λ_t‖ < ∞. The spectral density function of y_t can then be computed as an ordinary Fourier transform

Φ(z) = Σ_{j=−∞}^{∞} Λ_j z^{−j}.

We further assume that Φ(z) is coercive, i.e. Φ(z) > 0 for |z| = 1, z ∈ C, and that Φ(z) has no zeros at the origin and at infinity in the complex plane. Suppose moreover that Φ(z) is a rational spectral density function, i.e. the covariance matrix has a decomposition Λ_k = HF^{k−1}G (k = 1, 2, …), where (F, G, H) is a minimal realization and F ∈ R^{n×n}. In terms of (F, G, H), the causal component of Φ(z) is given by Φ_+(z) = H(zI − F)^{−1}G + (1/2)Λ_0, and we hence have

Φ(z) = Φ_+(z) + Φ_+^T(z^{−1}).  (2)

It should be noted that F is stable from Σ_{t=−∞}^{∞} ‖Λ_t‖ < ∞.
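These relations are easy to verify numerically. The following minimal sketch, with hypothetical (F, G, H, Λ_0) values chosen only for illustration (none of them come from the paper), evaluates Φ_+(z) and checks the coercivity of Φ(z) on the unit circle.

```python
# Numerical check of (1)-(2): a hypothetical stable (F, G, H) and Lambda_0,
# chosen only for illustration, and the spectral density on |z| = 1.
import numpy as np

F = np.array([[0.5, 0.2], [0.0, -0.3]])   # stable: eigenvalues inside the unit circle
G = np.array([[1.0], [0.5]])
H = np.array([[1.0, 0.0]])
Lam0 = np.array([[2.0]])                   # Lambda_0, assumed large enough for Phi > 0

def phi_plus(z):
    """Causal component Phi_+(z) = H (zI - F)^{-1} G + Lambda_0 / 2."""
    n = F.shape[0]
    return H @ np.linalg.solve(z * np.eye(n) - F, G) + Lam0 / 2.0

# coercivity: Phi(z) = Phi_+(z) + Phi_+^T(1/z) is Hermitian and should be
# positive definite for every z = e^{jw} on the unit circle
for w in np.linspace(0.0, np.pi, 100):
    z = np.exp(1j * w)
    Phi = phi_plus(z) + phi_plus(1.0 / z).T
    Phi = 0.5 * (Phi + Phi.conj().T)       # symmetrize numerical residue
    assert np.linalg.eigvalsh(Phi).min() > 0, "Phi not coercive at this frequency"
```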

2.2. Hilbert space of a sampled function

We introduce a Hilbert space (Lindquist & Picci, 1996a, 1996b). For an infinite string of data {y_t, t = 0, ±1, ±2, …}, let the tail matrix be defined by

y_t := [y_t  y_{t+1}  y_{t+2}  ⋯] ∈ R^{p×∞}.

We then define a vector space as

Y_∞ := { Σ_k a_k^T y_k | a_k ∈ R^p, k = 0, ±1, … },

which is the linear space spanned by all finite linear combinations of the row vectors of the y_t. For a^T y_i, b^T y_j ∈ Y_∞, define a bilinear form (inner product) as 〈a^T y_i, b^T y_j〉 = a^T Λ_{i−j} b. By completing the vector space with the norm induced by the inner product 〈·,·〉, we have a Hilbert space, which is also written as Y_∞. For an element A ∈ Y_∞ and a subset B ⊆ Y_∞, we write the orthogonal projection of A onto B as Ê(A|B).

We extend Y_∞ to Y_{•×∞} so that matrices are included as its elements: Y_{k×∞} := {[A_1^T ⋯ A_k^T]^T | A_j ∈ Y_∞}. For A ∈ Y_{•×∞} and B ⊆ Y_∞, we write a projection as Ê(A|B) = [(Ê(A_1|B))^T ⋯ (Ê(A_•|B))^T]^T. For A ∈ Y_{•×∞}, we describe the row space of A as span{A} = span{A_1} ∨ ⋯ ∨ span{A_•}, where A ∨ B = {a + b | a ∈ A, b ∈ B}. For A, B ∈ Y_{•×∞}, we write Ê(A|B) := Ê(A|span{B}).


2.3. Stochastic balanced realization

We review stochastic balancing via the CCA (Desai et al., 1985). The description in this subsection is based on the assumption that (i) an infinite covariance sequence (1) is available, or equivalently that {y_t ∈ Y_{p×∞} | t = 0, ±1, …} is available. We define the past and future matrices as

Y_t^- := [y_{t−1}^T  y_{t−2}^T  y_{t−3}^T  ⋯]^T,  Y_t^+ := [y_t^T  y_{t+1}^T  y_{t+2}^T  ⋯]^T.

We furthermore define the covariance matrices as

T^- := [ Λ_0    Λ_1    Λ_2   ⋯
         Λ_1^T  Λ_0    Λ_1   ⋯
         Λ_2^T  Λ_1^T  Λ_0   ⋯
         ⋮      ⋮      ⋮     ⋱ ] = 〈Y_t^-, Y_t^-〉,

T^+ := [ Λ_0    Λ_1^T  Λ_2^T ⋯
         Λ_1    Λ_0    Λ_1^T ⋯
         Λ_2    Λ_1    Λ_0   ⋯
         ⋮      ⋮      ⋮     ⋱ ] = 〈Y_t^+, Y_t^+〉,

where T^- = 〈Y_t^-, Y_t^-〉 > 0 and T^+ = 〈Y_t^+, Y_t^+〉 > 0 hold, and T^- and T^+ are invertible, since the spectral density function Φ(z) of y_t is coercive and F is stable (Katayama, 2005; Lindquist & Picci, 1996a). We also define the block Hankel matrix as

ℋ := [ Λ_1  Λ_2    Λ_3    ⋯
       Λ_2  Λ_3    Λ_4    ⋯
       Λ_3  Λ_4    Λ_5    ⋯
       ⋮    ⋮      ⋮      ⋱ ] = 〈Y_t^+, Y_t^-〉.  (3)

Since Λ_k has a decomposition Λ_k = HF^{k−1}G where F ∈ R^{n×n}, we have rank ℋ = n, and we compute the SVD of the weighted Hankel matrix ℋ:

(T^+)^{−1/2} ℋ (T^-)^{−T/2} = UΣV^T, Σ ∈ R^{n×n}.  (4)

We define the extended observability and reachability matrices, respectively, as

O := (T^+)^{1/2} U Σ^{1/2},  𝒞 := Σ^{1/2} V^T (T^-)^{T/2}.  (5)

The block Hankel matrix ℋ then has a canonical decomposition ℋ = O𝒞, where O and 𝒞 are described by certain matrices A ∈ R^{n×n}, G ∈ R^{n×p} and C ∈ R^{p×n} as

O = [C^T  (CA)^T  (CA²)^T  ⋯]^T,  (6)
𝒞 = [G  AG  A²G  ⋯].  (7)

It follows from (5) that 𝒞(T^-)^{−1}𝒞^T = Σ = O^T(T^+)^{−1}O holds, and hence Σ satisfies (Desai et al., 1985) both the forward Riccati equation

Σ = AΣA^T + (G − AΣC^T)(Λ_0 − CΣC^T)^{−1}(G − AΣC^T)^T,  (8)

and the backward Riccati equation

Σ = A^TΣA + (C − G^TΣA)^T(Λ_0 − G^TΣG)^{−1}(C − G^TΣA).

In this case, the triplet (A, G, C) is stochastically balanced, and the matrix Σ given by the SVD (4) is a stabilizing solution to the Riccati equation (8) (Desai et al., 1985; Faurre, 1976). It should be noted that there exists a stabilizing solution Σ to (8) if and only if Φ_+(z) = C(zI − A)^{−1}G + (1/2)Λ_0 is positive real and (A, G, C) is a minimal realization. In terms of Σ, define the matrices

R := Λ_0 − CΣC^T,
K := (G − AΣC^T)(Λ_0 − CΣC^T)^{−1}.

The matrix A − KC is then stable. It should be noted that A is also stable, since there exists a non-singular matrix T such that A = T^{−1}FT from (3), (6), (7) and Λ_k = HF^{k−1}G.

Let us write v_t := y_t − Ê(y_t|Y_t^-) and x_t := 𝒞(T^-)^{−1}Y_t^-. We then have Ê(Y_t^+|Y_t^-) = Ox_t (Akaike, 1975; Faurre, 1976), and hence a forward innovation representation of y_t is given by

[x_{t+1}; y_t] = [A  K; C  I][x_t; v_t],  〈v_s, v_t〉 = Rδ_{st}.

The spectral density function of y_t is factorized as

Φ(z) = W(z)W^T(z^{−1}),  (9)

where the canonical spectral factor is given by

W(z) = (C(zI − A)^{−1}K + I)R^{1/2}.

The inverse of W(z) is a stable whitening filter of y_t, and the innovation v_t can be computed from y_t, since A − KC is stable: v_t = R^{1/2}W^{−1}(z)y_t.
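On a finite truncation, the weighted SVD (4)–(5) is a few lines of linear algebra. The sketch below is a minimal illustration, assuming a hypothetical list covs = [Λ_0, …, Λ_{2t−1}] of exact covariance matrices and an order n; the function and variable names are the writer's, not the paper's.

```python
# Sketch of the CCA factorization (4)-(5) on t-block truncations of T^-, T^+, H.
import numpy as np

def block(covs, k):
    """Lambda_k for k >= 0, Lambda_{-k}^T for k < 0 (stationarity)."""
    return covs[k] if k >= 0 else covs[-k].T

def toeplitz_hankel(covs, t):
    Tm = np.block([[block(covs, j - i) for j in range(t)] for i in range(t)])  # past Toeplitz
    Tp = np.block([[block(covs, i - j) for j in range(t)] for i in range(t)])  # future Toeplitz
    Hk = np.block([[covs[i + j + 1] for j in range(t)] for i in range(t)])     # Hankel, cf. (3)
    return Tm, Tp, Hk

def sqrt_and_inv_sqrt(M):
    """Symmetric square root and its inverse via the eigendecomposition."""
    w, V = np.linalg.eigh(M)            # M symmetric positive definite
    return V @ np.diag(w**0.5) @ V.T, V @ np.diag(w**-0.5) @ V.T

def cca_factors(covs, t, n):
    Tm, Tp, Hk = toeplitz_hankel(covs, t)
    Tm12, Tm_m12 = sqrt_and_inv_sqrt(Tm)
    Tp12, Tp_m12 = sqrt_and_inv_sqrt(Tp)
    U, s, Vt = np.linalg.svd(Tp_m12 @ Hk @ Tm_m12)          # weighted SVD (4)
    sqS = np.diag(np.sqrt(s[:n]))
    O = Tp12 @ U[:, :n] @ sqS                               # observability, (5)
    Creach = sqS @ Vt[:n] @ Tm12                            # reachability, (5)
    return O, Creach, s[:n]
```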

2.4. A finite-interval balanced realization

We assume that (ii)(a) an exact finite covariance sequence

{Λ_0, Λ_1, Λ_2, …, Λ_{2τ−1}}  (10)

is available for τ > n; we equivalently assume that {y_t ∈ Y_{p×∞} | t = 0, 1, …, 2τ−1} is available, since we have the finite covariance sequence (10) from Λ_{i−j} = 〈y_i, y_j〉.

Maciejowski (1996) has derived a finite-interval balanced realization, adapting the stochastic balanced realization (Desai et al., 1985) to a finite covariance sequence. We review the realization, assuming that a finite covariance sequence (10) is available.

In terms of Λ_j in (10), define the covariance matrices

T_t^- := [ Λ_0        Λ_1        Λ_2       ⋯  Λ_{t−1}
           Λ_1^T      Λ_0        Λ_1       ⋯  Λ_{t−2}
           Λ_2^T      Λ_1^T      Λ_0       ⋯  Λ_{t−3}
           ⋮          ⋮          ⋮         ⋱  ⋮
           Λ_{t−1}^T  Λ_{t−2}^T  Λ_{t−3}^T ⋯  Λ_0 ] ∈ R^{pt×pt},  (11)

T_t^+ := [ Λ_0      Λ_1^T    Λ_2^T    ⋯  Λ_{t−1}^T
           Λ_1      Λ_0      Λ_1^T    ⋯  Λ_{t−2}^T
           Λ_2      Λ_1      Λ_0      ⋯  Λ_{t−3}^T
           ⋮        ⋮        ⋮        ⋱  ⋮
           Λ_{t−1}  Λ_{t−2}  Λ_{t−3}  ⋯  Λ_0 ] ∈ R^{pt×pt}  (12)

for t = 1, …, 2τ, and the block Hankel matrix

ℋ_τ := [ Λ_1  Λ_2      Λ_3      ⋯  Λ_τ
         Λ_2  Λ_3      Λ_4      ⋯  Λ_{τ+1}
         Λ_3  Λ_4      Λ_5      ⋯  Λ_{τ+2}
         ⋮    ⋮        ⋮        ⋱  ⋮
         Λ_τ  Λ_{τ+1}  Λ_{τ+2}  ⋯  Λ_{2τ−1} ] ∈ R^{pτ×pτ}.

The matrices T_t^-, T_t^+ and ℋ_τ are, respectively, given by truncation of T^-, T^+ and ℋ. It should be noted that the sequence {T_t^-}_{t≥1} is coercive, i.e. T_t^- > εI holds for some ε > 0 and all t ≥ 1 (Katayama, 2005; Lindquist & Picci, 1996a).

We compute the canonical decomposition, or the SVD of the weighted block Hankel matrix ℋ_τ (Lindquist & Picci, 1996a, 1996b),

(T_τ^+)^{−1/2} ℋ_τ (T_τ^-)^{−T/2} = U_τ Σ_τ V_τ^T, Σ_τ ∈ R^{n×n},  (13)

where U_τ^T U_τ = I_n, V_τ^T V_τ = I_n and rank Σ_τ = n. It follows that the extended observability and reachability matrices, O_τ and 𝒞_τ, are, respectively, given by

O_τ := (T_τ^+)^{1/2} U_τ Σ_τ^{1/2},  𝒞_τ := Σ_τ^{1/2} V_τ^T (T_τ^-)^{T/2}  (14)

with rank O_τ = n and rank 𝒞_τ = n, and hence ℋ_τ = O_τ𝒞_τ holds. The matrices O_τ and 𝒞_τ are written by means of A ∈ R^{n×n}, G ∈ R^{n×p} and C ∈ R^{p×n} such that

O_τ = [C^T  (CA)^T  ⋯  (CA^{τ−1})^T]^T,
𝒞_τ = [G  AG  ⋯  A^{τ−1}G].  (15)

In fact, there exists a τ-dependent non-singular matrix Q_τ ∈ R^{n×n} such that

C = C̄Q_τ,  A = Q_τ^{−1}ĀQ_τ,  G = Q_τ^{−1}Ḡ  (16)

and Q_τ → I when τ → ∞ (Lindquist & Picci, 1996a), where (Ā, Ḡ, C̄) denotes the stochastically balanced realization obtained in Section 2.3. We thus have a τ-dependent realization (A, G, C) satisfying Λ_k = CA^{k−1}G.

Define the matrices O_{τ−1} := O_τ(1 : (τ−1)p, :) and O_τ^↑ := O_τ(p+1 : τp, :). We then have

O_{τ−1}A = O_τ^↑,  (17)

where O_{τ−1} has full column rank, since we have O_{τ−1} ∈ R^{p(τ−1)×n} and rank O_{τ−1} = n from (ii)(a) τ > n. We can thus determine the matrix "A" uniquely from Eq. (17) (see also van der Veen, Deprettere, & Swindlehurst, 1993, p. 1282),

A = O_{τ−1}^† O_τ^↑,  (18)

where (·)^† denotes the pseudo-inverse. The matrix A computed from (18) is stable, since we have (16).

Adapting the stochastic balancing to the finite covariance sequence (10), we introduce the finite-interval balanced realization algorithm (Maciejowski, 1996).²

Finite-interval Balanced Realization Algorithm.
Step 1: Compute the SVD of the weighted ℋ_τ as (13).
Step 2: Define the extended observability and reachability matrices, O_τ and 𝒞_τ, respectively, as (14).
Step 3: Determine A as (18), and define G and C as

C = O_τ(1 : p, :),  G = 𝒞_τ(:, 1 : p).  (19)

Step 4: Determine R_τ and K_τ as

R_τ = Λ_0 − CΣ_τC^T,  (20)
K_τ = (G − AΣ_τC^T)(Λ_0 − CΣ_τC^T)^{−1}.  (21)

Define a spectral density function as

Φ_τ(z) := W_τ(z)W_τ^T(z^{−1}),  (22)

where W_τ(z) is given by

W_τ(z) := (C(zI − A)^{−1}K_τ + I)R_τ^{1/2}.  (23)

The spectral density function Φ_τ(z) converges to Φ(z) as τ → ∞. To the authors' knowledge, it has not been proved that W_τ(z) is of minimum phase (see also Maciejowski, 1996, p. 98).
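Under assumption (ii)(a) the whole algorithm reduces to a short numerical routine. The sketch below, assuming an illustrative input covs = [Λ_0, …, Λ_{2τ−1}] with τ > n (names are the writer's), carries out Steps 1–4 and returns (A, G, C, K_τ, R_τ).

```python
# Sketch of the Finite-interval Balanced Realization Algorithm (Steps 1-4)
# for assumed inputs covs = [Lambda_0, ..., Lambda_{2*tau-1}] and order n < tau.
import numpy as np

def finite_interval_balanced(covs, tau, n):
    p = covs[0].shape[0]
    blk = lambda k: covs[k] if k >= 0 else covs[-k].T
    Tm = np.block([[blk(j - i) for j in range(tau)] for i in range(tau)])   # (11)
    Tp = np.block([[blk(i - j) for j in range(tau)] for i in range(tau)])   # (12)
    Hk = np.block([[covs[i + j + 1] for j in range(tau)] for i in range(tau)])
    def sq(M, e):                                  # symmetric matrix power M^e
        w, V = np.linalg.eigh(M)
        return V @ np.diag(w**e) @ V.T
    # Step 1: SVD of the weighted Hankel matrix, Eq. (13)
    U, s, Vt = np.linalg.svd(sq(Tp, -0.5) @ Hk @ sq(Tm, -0.5))
    sqS = np.diag(np.sqrt(s[:n]))
    # Step 2: extended observability / reachability matrices, Eq. (14)
    O = sq(Tp, 0.5) @ U[:, :n] @ sqS
    Cr = sqS @ Vt[:n] @ sq(Tm, 0.5)
    # Step 3: A from shift invariance (18); C and G from (19)
    A = np.linalg.pinv(O[:-p]) @ O[p:]
    C = O[:p]
    G = Cr[:, :p]
    # Step 4: R_tau and K_tau from (20)-(21), with Sigma_tau = diag(s[:n])
    Sig = np.diag(s[:n])
    R = covs[0] - C @ Sig @ C.T
    K = (G - A @ Sig @ C.T) @ np.linalg.inv(R)
    return A, G, C, K, R
```

The returned quadruple (A, K, C, R) realizes W_τ(z) as in (23); Section 3 shows that A − KC obtained this way is stable whenever τ > n.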

3. Minimum phase properties

Assume that (ii)(a) a finite covariance sequence (10) is given. We prove that W_τ(z) in (23) is of minimum phase.

3.1. Riccati equations and finite-interval balancing

We introduce non-steady state Riccati equations in order to prove that W_τ(z) is of minimum phase. In terms of the τ-dependent realization (A, G, C), we define the matrices

O_t = [C^T  (CA)^T  ⋯  (CA^{t−1})^T]^T ∈ R^{pt×n},  (24)
𝒞_t = [G  AG  ⋯  A^{t−1}G] ∈ R^{n×pt}  (25)

for t = 1, …, 2τ. We moreover determine the matrices (Van Overschee & De Moor, 1996)

P_t = 𝒞_t(T_t^-)^{−1}𝒞_t^T ∈ R^{n×n},  (26)
P_{−t} = O_t^T(T_t^+)^{−1}O_t ∈ R^{n×n}  (27)

² The stability of "A" is not guaranteed for (ii)(b) τ ≤ n, even if we follow the finite-interval realization algorithm; this is because rank O_{τ−1} = n does not always hold for τ ≤ n. We therefore see big differences in state space modeling between the assumptions (ii)(a) τ > n and (ii)(b) τ ≤ n, since we have A = O_{τ−1}^†O_τ^↑ if rank O_{τ−1} = n.


for t = 1, …, 2τ with P_0 := 0 and P_{−0} := 0, where the matrices T_t^-, T_t^+ ∈ R^{pt×pt} are, respectively, given by (11) and (12).

Proposition 1 (Lindquist & Picci, 1996a; Van Overschee & De Moor, 1996). The matrices P_t and P_{−t}, respectively, satisfy the forward Riccati equation

P_{t+1} = AP_tA^T + (G − AP_tC^T)(Λ_0 − CP_tC^T)^{−1}(G − AP_tC^T)^T  (28)

with P_0 = 0 and Λ_0 − CP_tC^T > 0, and the backward Riccati equation

P_{−t−1} = A^TP_{−t}A + (C − G^TP_{−t}A)^T(Λ_0 − G^TP_{−t}G)^{−1}(C − G^TP_{−t}A)  (29)

with P_{−0} = 0 and Λ_0 − G^TP_{−t}G > 0, for t = 0, 1, 2, …, 2τ−1.

The solutions P_t and P_{−t} ∈ R^{n×n} (t = 0, 1, …, 2τ−1) depend on the fixed middle point τ, and they thus satisfy the Riccati equations (28) and (29), which are defined by the τ-dependent (A, G, C) and Λ_0.
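The forward recursion (28) is direct to iterate. A minimal sketch, assuming a positive real triplet (A, G, C) and Λ_0 already computed (all inputs hypothetical):

```python
# Forward non-steady state Riccati recursion (28), started from P_0 = 0.
import numpy as np

def riccati_forward(A, G, C, Lam0, steps):
    P = np.zeros_like(A)
    Ps = [P]
    for _ in range(steps):
        Rt = Lam0 - C @ P @ C.T            # stays positive definite (Prop. 1)
        M = G - A @ P @ C.T
        P = A @ P @ A.T + M @ np.linalg.solve(Rt, M.T)
        Ps.append(P)
    return Ps   # P_0, P_1, ...; monotonically non-decreasing (Lemma 1 below)
```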

Proposition 2 (Lindquist & Picci, 1996a). The matrices P_τ and P_{−τ} satisfy

P_τ = Σ_τ = P_{−τ},  (30)

where Σ_τ is given by the SVD (13).³ The triplet (A, G, C) is a finite-interval stochastically balanced realization.

It should also be noted that the solutions P_τ and P_{−τ} are positive definite, i.e. P_τ > 0 and P_{−τ} > 0, from (30).

3.2. Solutions to the non-steady state Riccati equations

We analyze the solutions P_t and P_{−t} to the non-steady state Riccati equations (28) and (29). Since we have assumed (ii)(a) τ > n in (10), the transfer function Φ_+(z) = C(zI − A)^{−1}G + (1/2)Λ_0 is positive real from (2), (9) and (16). There thus exist solutions to the following LMI in the variable P:

[ P − APA^T        G − APC^T
  (G − APC^T)^T    Λ_0 − CPC^T ] ≥ 0,  (31)

where P satisfies Λ_0 − CPC^T > 0. It should be noted that in the case (ii)(b) τ ≤ n the LMI (31) may not have solutions P, even if the triplet (A, G, C) is computed through Steps 1–3 of the finite-interval balanced realization algorithm, since the triplet cannot be estimated exactly. We extend the solutions of (28) and (29) for t = 2τ, 2τ+1, … . From (16) and Λ_t = C̄Ā^{t−1}Ḡ, we have

Λ_t = CA^{t−1}G (t = 2τ, 2τ+1, …)  (32)

³ We have (30) from (26), (27) and (14).

and we thus define T_t^-, T_t^+, O_t and 𝒞_t for t = 2τ+1, 2τ+2, … as (11), (12), (24) and (25), respectively.⁴ It should again be noted that under the assumption (ii)(b) we cannot extend Λ_t by means of (32).

Proposition 3. The solutions to (28) and (29), P_t and P_{−t} (t = 2τ, 2τ+1, …), are, respectively, given by (26) and (27).

Consider the Riccati equation

Σ = AΣA^T + (G − AΣC^T)(Λ_0 − CΣC^T)^{−1}(G − AΣC^T)^T,  (33)

and write P_∞ = lim_{t→∞} P_t and P_{−∞} = lim_{t→∞} P_{−t}.

Proposition 4 (Faurre, 1976). The matrices P_∞ and P_{−∞}^{−1} are, respectively, stabilizing and anti-stabilizing solutions to the Riccati equation (33).

The solutions P_t and P_{−t} to the non-steady state Riccati equations (28) and (29) are monotonically increasing; in fact, we have the following lemma.

Lemma 1. The matrices P_t and P_{−t} satisfy

0 = P_0 ≤ ⋯ ≤ P_t ≤ P_{t+1} ≤ ⋯ ≤ P_∞,  (34)
0 = P_{−0} ≤ ⋯ ≤ P_{−t} ≤ P_{−t−1} ≤ ⋯ ≤ P_{−∞}.  (35)

Proof. See Appendix A.1.

It should be noted that we have

Q_τP_∞Q_τ^T = Σ = Q_τ^{−T}P_{−∞}Q_τ^{−1}

from (8), (16) and (33).

3.3. Minimum phase properties

We now study the minimum phase properties of the finite-interval realization. In terms of P_t, we can define the matrices

R_t := Λ_0 − CP_tC^T,  (36)
K_t := (G − AP_tC^T)(Λ_0 − CP_tC^T)^{−1}  (37)

for t = 0, 1, … . We see from (30) that R_t and K_t in (36) and (37) coincide with R_τ and K_τ in (20) and (21) at t = τ, respectively.

Define the transfer functions

W_j(z) = (C(zI − A)^{−1}K_j + I)R_j^{1/2}  (38)

for j = 0, 1, … . We prove that W_τ(z) is of minimum phase by showing that W_j(z) is of minimum phase for every j = 0, 1, …,

⁴ We use the covariance matrices (32) only for proving that W_t(z) (t = 0, 1, …) is of minimum phase in Section 3.


i.e. that the following matrix is stable for t = 0, 1, …:

F_t = A − K_tC.  (39)

We define the set of solutions to the LMI (31) as 𝒫. There then exist P_* and P^* such that P_* ≤ P ≤ P^* for any P ∈ 𝒫. The matrices P_∞ and P^*, furthermore, satisfy P_∞ = P_* and P^* = P_{−∞}^{−1} (Faurre, 1976). Define the matrix

Δ_t := P^* − P_t.  (40)

It follows from (34) that Δ_t ≥ Δ_{t+1} holds. The matrix Δ_∞ moreover satisfies Δ_∞ = P^* − P_∞ > 0, since {T_t^-}_{t≥1} is coercive (Katayama, 2005; Lindquist & Picci, 1996a). These facts imply that Δ_t is non-singular, from

Δ_t ≥ Δ_∞ > 0.  (41)

By using Δ_t, we obtain the main theorem.

Theorem 1. The matrix Δ_t satisfies the Lyapunov inequality

Δ_t^{−1} ≥ F_t^TΔ_t^{−1}F_t + C^TR_t^{−1}C.  (42)

Thus, F_t is stable for t = 0, 1, … .

Proof. See Appendix A.2.

From Theorem 1, W_j(z) in (38) is of minimum phase for j = 0, 1, …, which shows that W_τ(z) in (23) is also of minimum phase. We have therefore proved that the finite-interval balanced realization algorithm (Maciejowski, 1996) gives a W_τ(z) of minimum phase. It should be pointed out that the whole argument here crucially relies on the positive realness of the realization (A, G, C, Λ_0), which stems from (ii)(a) τ > n; only under this assumption does the LMI (31) have solutions. Under the assumption (ii)(b) τ ≤ n, the matrices A and A − K_tC can thus be unstable.
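Theorem 1 is easy to check numerically: along the recursion (28), every F_t = A − K_tC must have spectral radius below one. A sketch with assumed positive real inputs (hypothetical, for illustration only):

```python
# Spectral radii of F_t = A - K_t C along the Riccati recursion (28); by
# Theorem 1 every radius is < 1 when (A, G, C, Lambda_0) is positive real.
import numpy as np

def spectral_radii(A, G, C, Lam0, steps):
    P = np.zeros_like(A)
    radii = []
    for _ in range(steps):
        Rt = Lam0 - C @ P @ C.T
        M = G - A @ P @ C.T
        Kt = np.linalg.solve(Rt.T, M.T).T            # K_t of (37): M R_t^{-1}
        radii.append(np.abs(np.linalg.eigvals(A - Kt @ C)).max())
        P = A @ P @ A.T + M @ Kt.T                   # = A P A^T + M R_t^{-1} M^T
    return radii
```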

4. A realization algorithm via block LQ decomposition

Under the assumption (ii)(a), we develop a new algorithm for a finite-interval balanced realization. We then discuss the minimum phase properties of the model obtained by the algorithm, using the results of Section 3.

4.1. Time-varying realization

Assuming that {y_t ∈ Y_{p×∞} | t = 0, 1, …, 2τ−1} is available, we define the past and future data as

Y_t^- := [y_{t−1}^T  y_{t−2}^T  ⋯  y_0^T]^T,  Y_s^+ := [y_s^T  y_{s+1}^T  ⋯  y_{2τ−1}^T]^T  (43)

for t = 1, …, 2τ and s = 0, …, 2τ−1. We then have ℋ_τ = 〈Y_τ^+, Y_τ^-〉, T_τ^+ = 〈Y_τ^+, Y_τ^+〉 and T_t^- = 〈Y_t^-, Y_t^-〉 for t = 1, …, 2τ.

Proposition 5 (Van Overschee & De Moor, 1996). The non-steady state Kalman filter state estimate x_t defined by the recursive formula

x_{t+1} = Ax_t + K_t(y_t − Cx_t)  (44)

with x_0 = 0 is explicitly written as x_t = 𝒞_t(T_t^-)^{−1}Y_t^- for t = 1, …, 2τ. Thus, P_t = 〈x_t, x_t〉 holds from (26).

Define the variables v_t := y_t − Ê(y_t|Y_t^-) for t = 1, …, 2τ−1, with v_0 := y_0. A finite-interval realization is hence derived from the non-steady state Kalman filter (44).⁵

Proposition 6 (Lindquist & Picci, 1996a, 1996b). The tail matrix y_t ∈ Y_{p×∞} (t = 0, 1, …, 2τ−1) is realized by the time-varying system

[x_{t+1}; y_t] = [A  K_t; C  I][x_t; v_t],  x_0 = 0,  (45)
〈v_t, v_s〉 = R_tδ_{ts}  (46)

for t = 0, …, 2τ−1 and s = 0, …, 2τ−1.

4.2. Block LQ decomposition of a data matrix

We derive a block LQ decomposition of a data matrix. To this end, we define the matrices

L_{i,j} := CA^{i−j−1}K_j (i > j),  L_{i,j} := I_p (i = j),  (47)

L_0^+ := [ L_{0,0}                            0
           ⋮                                  ⋱
           L_{2τ−1,0}  ⋯  L_{2τ−1,2τ−1} ],  (48)

R_0^+ := block-diag(R_0, …, R_{2τ−1}).

Lemma 2. The matrix Y_0^+ in (43) has the decomposition

Y_0^+ = L_0^+V_0^+,  〈V_0^+, V_0^+〉 = R_0^+,  (49)

where V_0^+ = [v_0^T  v_1^T  ⋯  v_{2τ−1}^T]^T ∈ Y_{2τp×∞}.

Proof. We can easily derive (49) from (45) and (46). □

Partition L_0^+ and R_0^+ as

[ L_τ^-  0
  S_τ    L_τ^+ ] := L_0^+,  [ R_τ^-  0
                              0      R_τ^+ ] := R_0^+,  (50)

⁵ Proposition 6 is derived as follows: define ŷ_t := Ê(y_t|Y_t^-). We have y_t = ŷ_t + v_t, 〈ŷ_t, v_t〉 = 0 and ŷ_t = Cx_t, and hence (45) and 〈v_t, v_t〉 = Λ_0 − 〈ŷ_t, ŷ_t〉 = R_t. We have 〈v_s, v_t〉 = 0 (s ≠ t) from the definition of v_t.


where L_τ^-, S_τ, L_τ^+, R_τ^-, R_τ^+ ∈ R^{τp×τp}. Define

J_{p,t} := [ 0    ⋯  I_p
             ⋮    ⋰  ⋮
             I_p  ⋯  0 ] ∈ R^{pt×pt},  (51)

the block reverse-ordering matrix, which satisfies J_{p,t} = J_{p,t}^T and J_{p,t}J_{p,t} = I_{pt}. Writing the variables

V_τ^- := [v_{τ−1}^T  v_{τ−2}^T  ⋯  v_0^T]^T,  V_τ^+ := [v_τ^T  v_{τ+1}^T  ⋯  v_{2τ−1}^T]^T

and defining L̄_τ^- := J_{p,τ}L_τ^-J_{p,τ} and S̄_τ := S_τJ_{p,τ}, we have a block LQ decomposition of the past and the future.

Theorem 2. The past Y_τ^- and the future Y_τ^+ of (43) are decomposed as

[ Y_τ^-
  Y_τ^+ ] = [ L̄_τ^-  0
              S̄_τ    L_τ^+ ] [ V_τ^-
                               V_τ^+ ],  (52)

where V_τ^- and V_τ^+ satisfy

〈 [ V_τ^-
    V_τ^+ ], [ V_τ^-
               V_τ^+ ] 〉 = [ R̄_τ^-  0
                             0       R_τ^+ ],  (53)

where R̄_τ^- := J_{p,τ}R_τ^-J_{p,τ}. Furthermore, the orthogonal projection of the future onto the past is written as

Ê(Y_τ^+|Y_τ^-) = S̄_τV_τ^-.  (54)

Proof. Evident from the above discussion. □

In the earlier study (Tanaka & Katayama, 2006), we revisited the stochastic realization problem and derived an algorithm via a block LQ decomposition for τ → ∞. That derivation is based on the steady state Kalman filter, and the expression in Theorem 2 is consistent with the earlier results, i.e.

lim_{τ→∞} L_{i,j} = CA^{i−j−1}K,  (55)
lim_{τ→∞} R̄_τ^- = lim_{τ→∞} R_τ^+ = block-diag(R, R, …).  (56)

The present result, however, gives the block LQ decomposition of the finite data matrix in more detail; (55) and (56) are replaced by (47) and

R̄_τ^- = block-diag(R_{τ−1}, …, R_0),  (57)
R_τ^+ = block-diag(R_τ, …, R_{2τ−1}).  (58)

These equations will be useful for deriving a new subspace identification method in Section 5.
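At the covariance level, (49) says that 〈Y_0^+, Y_0^+〉 = L_0^+R_0^+(L_0^+)^T with L_0^+ block unit lower triangular and R_0^+ block diagonal, i.e. a block LDL factorization. A minimal sketch, with the block size p as an assumed parameter:

```python
# Block-LDL factorization Sigma = L R L^T with unit diagonal blocks (cf. (49)).
import numpy as np

def block_ldl(Sigma, p):
    N = Sigma.shape[0] // p
    L = np.eye(N * p)
    R = np.zeros_like(Sigma)
    S = Sigma.astype(float).copy()
    for j in range(N):
        jj = slice(j * p, (j + 1) * p)
        Rj = S[jj, jj].copy()
        R[jj, jj] = Rj                                     # R_j block of R_0^+
        rest = slice((j + 1) * p, N * p)
        L[rest, jj] = S[rest, jj] @ np.linalg.inv(Rj)      # L_{i,j} blocks
        S[rest, rest] -= L[rest, jj] @ Rj @ L[rest, jj].T  # Schur complement
    return L, R
```

Applied to the exact 2τ-block covariance of Y_0^+, the (i, j) block of L reproduces L_{i,j} = CA^{i−j−1}K_j in (47), and R recovers R_0, …, R_{2τ−1}.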

4.3. A finite-interval balanced realization algorithm

We derive a new algorithm for the finite-interval balanced realization. In terms of the block elements K_t in (47) and L_{i,j} in (48), define the matrices

ℱ_τ := [K_{τ−1}  AK_{τ−2}  ⋯  A^{τ−1}K_0],  (59)
𝒦_{τ+1} := [K_0  K_1  ⋯  K_τ],  (60)

𝒯_{τ+1} := [ L_{1,0}    L_{2,1}  ⋯  L_{τ+1,τ}
             L_{2,0}    L_{3,1}  ⋯  L_{τ+2,τ}
             ⋮          ⋮           ⋮
             L_{τ−1,0}  L_{τ,1}  ⋯  L_{2τ−1,τ} ].  (61)

From (48), (50) and S̄_τ = S_τJ_{p,τ}, we can write S̄_τ as

S̄_τ = [ L_{τ,τ−1}     L_{τ,τ−2}     ⋯  L_{τ,0}
        L_{τ+1,τ−1}   L_{τ+1,τ−2}   ⋯  L_{τ+1,0}
        ⋮             ⋮                ⋮
        L_{2τ−1,τ−1}  L_{2τ−1,τ−2}  ⋯  L_{2τ−1,0} ],

and we have the following result.

Lemma 3. The block matrix S̄_τ has rank n and the decomposition

S̄_τ = O_τℱ_τ.  (62)

Furthermore, the matrix 𝒯_{τ+1} has the decomposition

𝒯_{τ+1} = O_{τ−1}𝒦_{τ+1}.  (63)

Proof. See Appendix A.3.

We give a finite-interval balanced realization by means of the SVD of the weighted S̄_τ.

Lemma 4. For S̄_τ, R̄_τ^- and T_τ^+, the SVD of the weighted S̄_τ is expressed as

(T_τ^+)^{−1/2}S̄_τ(R̄_τ^-)^{1/2} = U_τΣ_τV̄_τ^T,  Σ_τ ∈ R^{n×n},  (64)

where U_τ and Σ_τ are given by the SVD (13), and V̄_τ satisfies V̄_τ^TV̄_τ = I. Thus, the matrices O_τ and ℱ_τ are given by

O_τ = (T_τ^+)^{1/2}U_τΣ_τ^{1/2},  ℱ_τ = Σ_τ^{1/2}V̄_τ^T(R̄_τ^-)^{−1/2}.  (65)

Note that O_τ derived above is equal to (14).

Proof. See Appendix A.4.

Summarizing the above results, a new algorithm for the finite-interval realization is obtained as follows.

Finite-interval realization via block LQ decomposition.
Step 1: Given Y_τ^- and Y_τ^+, compute the decomposition (52) with (53),⁶ and obtain L̄_τ^-, L_τ^+, S̄_τ, R̄_τ^- and R_τ^+.
Step 2: Compute T_τ^+ = 〈Y_τ^+, Y_τ^+〉 and the SVD in (64), and then determine O_τ as (65).
Step 3: Compute A and C from (18) and (19), respectively.

⁶ We may compute this decomposition from (49), using (50).


Step 4: Define 𝒯_{τ+1} as (61), and determine K_t (t = 0, 1, …, τ) from 𝒦_{τ+1} = O_{τ−1}^†𝒯_{τ+1} and (60). Determine R_t (t = 0, 1, …, τ) from (57) and (58).

The quadruple (A, K_j, C, R_j) obtained by the above algorithm gives the finite-interval realization (45) of y_j for j = 0, 1, …, τ, and coincides with the finite-interval balanced realization (A, K_τ, C, R_τ) at j = τ. Under the assumption (ii)(a), the realization (A, K_τ, C, R_τ) computed by the above algorithm thus gives a stable minimum phase model for (A, K, C, R) without solving Riccati equations.⁷
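At the matrix level, Steps 1–4 compose as follows. The sketch assumes L_full and R_full come from a block-LDL factorization of 〈Y_0^+, Y_0^+〉 (as in the Section 4.2 sketch) and T_plus = 〈Y_τ^+, Y_τ^+〉; all argument names are the writer's conventions.

```python
# Sketch of the finite-interval realization via block LQ (Steps 1-4), returning
# the model at j = tau, i.e. (A, C, K_tau, R_tau).
import numpy as np

def realization_via_block_lq(L_full, R_full, T_plus, p, tau, n):
    J = np.kron(np.fliplr(np.eye(tau)), np.eye(p))      # J_{p,tau} of (51)
    S_bar = L_full[tau * p:, :tau * p] @ J              # S_tau J_{p,tau}
    R_minus = J @ R_full[:tau * p, :tau * p] @ J        # block-diag(R_{tau-1},...,R_0), (57)
    def sq(M, e):                                       # symmetric matrix power M^e
        w, V = np.linalg.eigh(M)
        return V @ np.diag(w**e) @ V.T
    U, s, _ = np.linalg.svd(sq(T_plus, -0.5) @ S_bar @ sq(R_minus, 0.5))  # (64)
    O = sq(T_plus, 0.5) @ U[:, :n] @ np.diag(np.sqrt(s[:n]))              # O_tau, (65)
    A = np.linalg.pinv(O[:-p]) @ O[p:]                  # (18)
    C = O[:p]                                           # (19)
    col = slice(tau * p, (tau + 1) * p)                 # tau-th block column
    stack = L_full[(tau + 1) * p:, col]                 # L_{tau+1,tau}, ..., L_{2tau-1,tau}
    K_tau = np.linalg.pinv(O[:-p]) @ stack
    R_tau = R_full[col, col]
    return A, C, K_tau, R_tau
```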

Define a spectral density function as

Φ_j(z) = W_j(z)W_j^T(z^{−1}),

where W_j(z) is given in (38). We see that W_τ(z) for large τ is a good approximation to the minimum phase spectral factor W(z) of Φ(z),⁸ since Φ_j(z) converges to Φ(z) as j → ∞ and W_j(z) is a stable minimum phase system from the results of Section 3. It should be noted that the difference between W(z) and W_τ(z) decreases exponentially as a function of τ, according to ∏_{j=0}^{τ}(A − K_jC), and that counter-examples can be constructed if A − KC has eigenvalues close to the unit circle (De Moor, 2003).

5. Extension to a subspace identification method

We adapt the finite-interval realization to (iii) a finite string of time-series data

{y_0, y_1, …, y_{N+2τ−2}},  (66)

where N and τ are large. We compute covariance matrices approximately from Λ_{i−j} ≈ Λ̂_{i,j} := (1/N)Σ_{t=0}^{N−1} y_{i+t}y_{j+t}^T. Define the matrix

y_t := [y_t  y_{t+1}  ⋯  y_{t+N−1}] ∈ R^{p×N}

for t = 0, 1, …, 2τ−1. The inexact covariance matrices are then computed from Λ̂_{i,j} = (1/N)y_iy_j^T. Also define the matrix

Y_0^+ = [y_0^T  y_1^T  ⋯  y_{2τ−1}^T]^T ∈ R^{2τp×N}.

We have a stochastic subspace identification algorithm based on the assumption (1/N)(Y_0^+)(Y_0^+)^T > 0, adapting the block LQ decomposition of (49) to the matrix Y_0^+.

⁷ The algorithm does not break down even under the assumption (ii)(b), and may then compute approximate realizations. This fact is one of the advantages of this algorithm, although the results may then differ from what is expected.

⁸ We choose W_τ(z) as the approximation to W(z) among the W_j(z), j = 0, 1, …, since the matrices K_j (j = τ+1, τ+2, …) are not always computable from (47); in fact we have

K_j = O_{τ−1}^†[L_{j+1,j}^T  L_{j+2,j}^T  ⋯  L_{j+τ−1,j}^T]^T,

but the L_{i,j} (i = 2τ, 2τ+1, …) are not given in L_0^+ in (48).

Stochastic Subspace Identification Algorithm.
Step 1: Given Y_0^+, we compute the decomposition⁹

Y_0^+ = [ L̂_{0,0}                          0
          ⋮                                ⋱
          L̂_{2τ−1,0}  ⋯  L̂_{2τ−1,2τ−1} ] [ v̂_0
                                            ⋮
                                            v̂_{2τ−1} ] = L̂_0^+V̂_0^+,

where v̂_j satisfies the orthogonality condition

R̂_0^+ = (1/N)(V̂_0^+)(V̂_0^+)^T = block-diag(R̂_0, …, R̂_{2τ−1}),

and where L̂_{i,j} ∈ R^{p×p}, L̂_{i,i} = I_p and V̂_0^+ ∈ R^{2τp×N}. Partition L̂_0^+ and R̂_0^+ as

L̂_0^+ = [ L̂_τ^-  0
           Ŝ_τ    L̂_τ^+ ],  R̂_0^+ = [ R̂_τ^-  0
                                        0      R̂_τ^+ ],

where L̂_τ^-, L̂_τ^+, Ŝ_τ ∈ R^{τp×τp} and R̂_τ^-, R̂_τ^+ ∈ R^{τp×τp}. Define moreover the matrix T̂_τ^+ := (1/N)(Y_τ^+)(Y_τ^+)^T, where Y_τ^+ is given by Y_τ^+ := [y_τ^T ⋯ y_{2τ−1}^T]^T.

Step 2: Compute the SVD of the weighted Ŝ_τJ_{p,τ} as

(T̂_τ^+)^{−1/2}(Ŝ_τJ_{p,τ})(J_{p,τ}R̂_τ^-J_{p,τ})^{1/2} = Û_τΣ̂_τV̂_τ^T,  Σ̂_τ ∈ R^{n×n}.

Define the matrices Ô_τ := (T̂_τ^+)^{1/2}Û_τΣ̂_τ^{1/2}, Ô_{τ−1} := Ô_τ(1 : p(τ−1), :) and Ô_τ^↑ := Ô_τ(p+1 : pτ, :).

Step 3: Compute Â and Ĉ from

Â = Ô_{τ−1}^†Ô_τ^↑,  Ĉ = Ô_τ(1 : p, :).

Step 4: Determine K̂_τ and R̂_τ as

K̂_τ = Ô_{τ−1}^†[L̂_{τ+1,τ}^T  L̂_{τ+2,τ}^T  ⋯  L̂_{2τ−1,τ}^T]^T,
R̂_τ = R̂_0^+(pτ+1 : p(τ+1), pτ+1 : p(τ+1)).

Define a spectral density function as

Φ̂_τ(z) := Ŵ_τ(z)Ŵ_τ^T(z^{−1}),

where Ŵ_τ(z) is given by

Ŵ_τ(z) := (Ĉ(zI − Â)^{−1}K̂_τ + I)R̂_τ^{1/2}.

From Λ̂_{i,j} → Λ_{i−j} (N → ∞), we have Φ̂_τ(z) → Φ_τ(z) (N → ∞) and Φ_τ(z) → Φ(z) (τ → ∞). We thus use Φ̂_τ(z) as an estimate of Φ(z) for large N and τ, and we see that the present stochastic subspace identification method is based on the finite-interval realization via block LQ decomposition at j = τ, i.e. the realization of W_τ(z).¹⁰

⁹ We can easily compute this decomposition from the standard LQ decomposition.

¹⁰ We have also presented a subspace identification algorithm (Tanaka & Katayama, 2006) based on the fact that R and K are, respectively, obtained from R̄_τ^-(1 : p, 1 : p) in (57) and ℱ_τ(:, 1 : p) in (59) with τ → ∞. The earlier algorithm (Tanaka & Katayama, 2006) thus turns out to be based on W_{τ−1}(z) from Section 4, so the present algorithm, based on W_τ(z), possibly gives better models than the earlier one.
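Moving from exact covariances to data only changes the first step: the exact Gram matrix is replaced by the sample one. A short sketch of forming Y_0^+ and its Gram matrix from a single record (the array shapes are the writer's convention):

```python
# Building Y_0^+ in R^{2 tau p x N} from one output record y of shape (p, T)
# with T >= N + 2*tau - 1, and forming the sample Gram matrix used in Step 1.
import numpy as np

def data_matrix(y, tau, N):
    """Stack the tails y_t = [y_t ... y_{t+N-1}] for t = 0, ..., 2*tau - 1."""
    return np.vstack([y[:, t:t + N] for t in range(2 * tau)])

# Gram_hat replaces <Y_0^+, Y_0^+>; the algorithm assumes it is positive
# definite, i.e. (1/N) Y0p Y0p^T > 0:
#   Y0p = data_matrix(y, tau, N); Gram_hat = Y0p @ Y0p.T / N
```

Feeding Gram_hat to a block-LDL factorization and the weighted SVD, exactly as in the Section 4 sketches, then yields (Â, Ĉ, K̂_τ, R̂_τ).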


Adapting the finite-interval balanced realization via the CCA (Maciejowski, 1996) to a finite string of time-series data, we also have a stochastic subspace identification method via the CCA (see "Stochastic Balanced realization—Algorithm A" in Section 8.5 of Katayama, 2005). One of the differences between the subspace identification algorithms via the CCA and via the block LQ decomposition is that the latter estimates the innovation v̂_j in the first step of the algorithm.¹¹ It should be mentioned that, since only sample statistics are available, the stochastic subspace identification algorithm, whether via the block LQ decomposition or via the CCA, guarantees neither stability nor minimum phase.¹²

6. Numerical simulations

We give numerical simulation results in order to illustrate the finite-interval realization and stochastic subspace identification considered in this paper.

6.1. Finite-interval realization

We show numerical simulations of the finite-interval realization. For a finite covariance sequence (10) with τ > n, we give two simulation results showing that W_j(z) is of minimum phase, and how W_j(z) approaches W(z) as j increases.

Example 1. Assume that the spectral density function of y_t is given by (9), where W(z) is a fifth-order system,

W(z) = W_{N1}(z)/W_D(z),  (67)
W_{N1}(z) = 0.0551 + 0.0275z^{−1} + 1.0×10^{−3}z^{−5},
W_D(z) = 1 − 2.3443z^{−1} + 3.0810z^{−2} − 2.5274z^{−3} + 1.2415z^{−4} − 0.3686z^{−5}.

Suppose that W(z) = C(zI − A)^{−1}B + D is a minimal realization with A ∈ R^{n×n} (n = 5), and let τ = 12. In order to compute the exact covariance sequence (10), we solve the Lyapunov equation P = APA^T + BB^T and determine Λ_0 = CPC^T + DD^T and Λ_j = CA^{j−1}(APC^T + BD^T).

We obtain W_j(z) (j = 0, …, τ) in (38), where (A, G, C) is computed through Steps 1–3 of the finite-interval balanced realization algorithm in Section 2.4, and where R_j and K_j are calculated from (28), (36) and (37). From a numerical computation, we can easily confirm that all W_j(z) are of minimum phase for j = 0, 1, …, τ.

¹¹ See Tanaka, ALMutawa, and Katayama (2005) for a practical application that benefits from this fact.
¹² It is possible to guarantee these properties by solving Riccati equations (Tanaka & Katayama, 2005).

Fig. 1. Bode plots of W(z) and W_j(z) for j = 0, …, τ, where the solid and dotted lines denote W(z) and W_j(z), respectively.

Bode plots of W(z) and W_j(z) are shown in Fig. 1, where we see that the Bode plot of W_j(z) approaches that of W(z) as j increases. The transfer function W_τ(z) is hence a good approximation to W(z) in this example.

Example 2. Assume that the spectral density function of y_t is given by (9), where W(z) is also a fifth-order system,

W(z) = W_{N2}(z)/W_D(z),  (68)
W_{N2}(z) = 8.0×10^{−3} + 0.0275z^{−4} + 0.0551z^{−5},

which has zeros near the unit circle in the complex plane. Exact covariance matrices are computed for τ = 12, and it is again easily confirmed that the estimates W_j(z) are of minimum phase for j = 0, 1, …, τ.

Bode plots of W(z) and W_j(z) are shown in Fig. 2, where the Bode plot of W_j(z) approaches that of W(z) as j increases. With respect to the magnitude around the frequency 0.7, W_τ(z) is not a good approximation, though it is the best approximation to W(z) among the W_j(z) (j = 0, …, τ = 12). The difference between the Bode plots of W(z) and W_τ(z) comes from A − K_∞C ≠ A − K_τC and R_∞ ≠ R_τ (τ < ∞), i.e. the difference between the steady and non-steady state Kalman filters. In this example, we should take a larger τ to obtain better estimates around the frequency 0.7.

From Examples 1 and 2, we see that W_τ(z) gives a good approximation to W(z) if W(z) does not have zeros near the unit circle.

Fig. 2. Bode plots of W(z) and W_j(z) for j = 0, …, τ, where the solid and dotted lines denote W(z) and W_j(z), respectively.

6.2. Stochastic subspace identification

We give two simulation results for the stochastic subspace identification algorithm via the block LQ decomposition, based on the assumption that a finite string of time-series data (66) is available.

Example 3. Time-series data are generated by the system

y_t = W(z)e_t,  (69)

where e_t is a white noise with zero mean and unit variance, and W(z) is given by (67). We estimate the system in 30 simulation runs carried out with different noise realizations, for τ = 12, N = 5000 and the model order n̂ = 5.

Bode plots of the estimates Ŵ_τ(z) obtained by the present subspace identification method and of the true spectral factor W(z) are shown in Fig. 3. The Bode plots of Ŵ_τ(z) are clustered around that of W(z).

The algorithm never breaks down under the assumption (1/N)(Y_0^+)(Y_0^+)^T > 0, and the present algorithm gives 28 stable, minimum phase models out of the 30 runs. Numerical simulations show that Â and Â − K̂_τĈ are not always stable for N < ∞, though we have often obtained minimum phase models when n̂ = n, a result that is reasonable in view of Theorem 1. The instability apparently stems from the inexact coefficients, i.e. Λ̂_{i,j} ≠ Λ_{i−j} (N < ∞). It may also be noted that, for example, for τ = 12, N = 5000 and n̂ = 7, we obtain only 18 stable, minimum phase models.

Example 4. Time-series data are generated by (69), where W(z) is given by (68). We again estimate systems in 30 simulation runs carried out with different noise realizations (τ = 12, N = 5000, n̂ = 5). The present algorithm gives 30 stable, minimum phase models.

Fig. 3. Bode plots of W(z) and Ŵ_τ(z), where the dashed and dotted lines denote W(z) and Ŵ_τ(z), respectively.

Fig. 4. Bode plots of W(z) and Ŵ_τ(z), where the dashed and dotted lines denote W(z) and Ŵ_τ(z), respectively (τ = 12).

Bode plots of the estimates Ŵ_τ(z) and of the true spectral factor W(z) are shown in Fig. 4. We do not obtain good approximations of the magnitude around the frequency 0.7. In this example, τ should be larger in order to obtain better estimates; Fig. 5 shows the Bode plots for τ = 30 (N = 4964, n̂ = 5), and we see that Ŵ_τ(z) gives better estimates around the frequency 0.7 than the one in Fig. 4.

From Examples 3 and 4, we also see that Ŵ_τ(z) gives good approximations to W(z) if W(z) does not have zeros near the unit circle, and if τ > max{n, n̂} and N is large. It should again be noted that we can apply the block LQ decomposition algorithm to data that do not satisfy τ > n; however, potential users should keep in mind that the results might differ from the expected one, depending on the number of available data N, n and τ.

Fig. 5. Bode plots of W(z) and Ŵ_τ(z), where the dashed and dotted lines denote W(z) and Ŵ_τ(z), respectively (τ = 30).

7. Concluding remarks

In this paper, we have analyzed a finite-interval stochastically balanced realization based on the idealized assumption that an exact finite covariance sequence is available, and we have shown that the finite-interval stochastically balanced realization (Maciejowski, 1996) provides stable minimum phase models if τ > n. We then derived a finite-interval realization algorithm and a stochastic subspace identification method based on block LQ decomposition. We also considered the minimum phase properties of the models obtained by these algorithms, and showed some simulation results.

Acknowledgments

We appreciate the constructive suggestions from the anonymous reviewers.

Appendix A. Proofs of lemmas and theorems

A.1. Proof of Lemma 1

We prove (34) recursively, by using (26). For t = 0, we have P_1 = GΛ_0^{−1}G^T ≥ 0 = P_0. For t ≥ 1, the matrices 𝒞_{t+1} and T_{t+1}^- are written as

𝒞_{t+1} = [𝒞_t  A^tG],

T_{t+1}^- = [ T_t^-             J_{p,t}O_tG
              (J_{p,t}O_tG)^T   Λ_0 ],

where J_{p,t} is defined in (51). It follows from the Schur complement that

(T_{t+1}^-)^{−1} = [ I  −(T_t^-)^{−1}J_{p,t}O_tG
                     0  I ] [ (T_t^-)^{−1}  0
                              0             R_{−t}^{−1} ] [ I  −(T_t^-)^{−1}J_{p,t}O_tG
                                                            0  I ]^T,

where R_{−t} := Λ_0 − (J_{p,t}O_tG)^T(T_t^-)^{−1}(J_{p,t}O_tG), and we have R_{−t} > 0 from T_{t+1}^- > 0. From (26), we therefore have

P_{t+1} − P_t = (−𝒞_t(T_t^-)^{−1}J_{p,t}O_tG + A^tG)R_{−t}^{−1}(−𝒞_t(T_t^-)^{−1}J_{p,t}O_tG + A^tG)^T ≥ 0.

We can similarly prove (35). We have thus proved Lemma 1. □

A.2. Proof of Theorem 1

Since P^* is the anti-stabilizing solution to the Riccati equation (33), P^* satisfies

P^* = AP^*A^T + (G − AP^*C^T)(Λ_0 − CP^*C^T)^{−1}(G − AP^*C^T)^T.  (A.1)

Define the matrices

R^* := Λ_0 − CP^*C^T,  (A.2)
K^* := (G − AP^*C^T)(Λ_0 − CP^*C^T)^{−1}.  (A.3)

Substituting (A.2) and (A.3) into (A.1), and (36) and (37) into (28), we have the Lyapunov equations

P^* = AP^*A^T + K^*R^*K^{*T},  (A.4)
P_{t+1} = AP_tA^T + K_tR_tK_t^T.  (A.5)

From (A.4), (A.5) and (40), we have

Δ_{t+1} = AΔ_tA^T + K^*R^*K^{*T} − K_tR_tK_t^T.  (A.6)

By using (A.2), (A.3), (36), (37) and (40) again, we obtain

(K^* − K_t)R^*(K^* − K_t)^T
 = K^*R^*K^{*T} + K_tR^*K_t^T − K^*R^*K_t^T − K_tR^*K^{*T}
 = K^*R^*K^{*T} + (K_tR_tK_t^T − K_tCΔ_tC^TK_t^T) − (G − AP^*C^T)K_t^T − K_t(G − AP^*C^T)^T  (A.7)
 = K^*R^*K^{*T} − K_tR_tK_t^T − K_tCΔ_tC^TK_t^T + (AΔ_tC^T)K_t^T + K_t(AΔ_tC^T)^T,  (A.8)

where we have used K_tR_tK_t^T = −K_tR_tK_t^T + (G − AP_tC^T)K_t^T + K_t(G − AP_tC^T)^T and (40) in order to derive (A.8) from (A.7). From (A.6), (A.8) and (39), we thus have

Δ_{t+1} = AΔ_tA^T + (K^* − K_t)R^*(K^* − K_t)^T − (AΔ_tC^T)K_t^T − K_t(AΔ_tC^T)^T + K_tCΔ_tC^TK_t^T
        = F_tΔ_tF_t^T + (K^* − K_t)R^*(K^* − K_t)^T.

By using F_tΔ_tC^T = (K_t − K^*)(Λ_0 − CP^*C^T),¹³ we have

Δ_{t+1} = F_tΔ_tF_t^T + F_tΔ_tC^T(Λ_0 − CP^*C^T)^{−1}CΔ_tF_t^T.  (A.9)

We next prove (42). Since Δ_{t+1} is non-singular, so is F_t from (A.9). We thus have, from (A.9) and (40),

F_t^{−1}Δ_{t+1}F_t^{−T} = Δ_t + Δ_tC^T(Λ_0 − CP_tC^T − CΔ_tC^T)^{−1}CΔ_t.

By applying the matrix inversion lemma to this equation, we obtain

F_t^TΔ_{t+1}^{−1}F_t = Δ_t^{−1} − C^T(Λ_0 − CP_tC^T)^{−1}C.

We have (42) from Δ_t^{−1} ≤ Δ_{t+1}^{−1} and (36), showing that F_t is stable from R_t > 0. We have thus proved Theorem 1. □

A.3. Proof of Lemma 3

Eqs. (62) and (63) are easily derived from (47).

We show that ℱ_τ has full rank. Consider the time-varying Lyapunov equation (A.5) with P_0 = 0. The solution to the Lyapunov equation (A.5) is given by P_{t+1} = Σ_{j=0}^{t} A^jK_{t−j}R_{t−j}K_{t−j}^T(A^j)^T. From (30) we hence have

ℱ_τR̄_τ^-ℱ_τ^T = Σ_τ > 0,  (A.10)

and rank ℱ_τ = n. The rank of S̄_τ is n, since O_τ and ℱ_τ have full rank from (14) and (A.10).

A.4. Proof of Lemma 4

From ℋ_τ = O_τ𝒞_τ and Proposition 5, we have the orthogonal projection

Ê(Y_τ^+|Y_τ^-) = ℋ_τ(T_τ^-)^{−1}Y_τ^- = O_τx_τ,

and thus 〈ℋ_τ(T_τ^-)^{−1}Y_τ^-, ℋ_τ(T_τ^-)^{−1}Y_τ^-〉 = 〈S̄_τV_τ^-, S̄_τV_τ^-〉 holds from (54). By using T_τ^- = 〈Y_τ^-, Y_τ^-〉, R̄_τ^- = 〈V_τ^-, V_τ^-〉 and (13),

¹³ This equation is derived from

F_tP_tC^T = (A − K_tC)P_tC^T = G − K_tΛ_0,
F_tP^*C^T = G − (K_t − K^*)CP^*C^T − K^*Λ_0,

which are given by (39).

we have

ℋ_τ(T_τ^-)^{−1}ℋ_τ^T = S̄_τR̄_τ^-S̄_τ^T = (T_τ^+)^{1/2}U_τΣ_τ²U_τ^T(T_τ^+)^{T/2},  (A.11)

where U_τ and Σ_τ are obtained by the SVD (13). We have the SVD (64) from (A.11). We thus have S̄_τ = (T_τ^+)^{1/2}U_τΣ_τV̄_τ^T(R̄_τ^-)^{−1/2}, and the matrices O_τ and ℱ_τ are given by (65) from (62). The matrix O_τ in (65) is equal to the one in (14) from (A.11).

References

Akaike, H. (1974). Stochastic theory of minimal realization. IEEE Transactions on Automatic Control, 19(6), 667–674.
Akaike, H. (1975). Markovian representation of stochastic processes by canonical variables. SIAM Journal on Control, 13(1), 162–173.
Aoki, M. (1990). State space modeling of time series (2nd ed.). Berlin: Springer.
Dahlén, A., Lindquist, A., & Mari, J. (1998). Experimental evidence showing that stochastic subspace identification methods may fail. Systems & Control Letters, 34(5), 303–312.
De Moor, B. (2003). On the number of rows and columns in subspace identification methods. In Preprints of the 13th IFAC symposium on system identification (SYSID 2003) (pp. 1796–1801).
Desai, U. B., & Pal, D. (1984). A transformation approach to stochastic model reduction. IEEE Transactions on Automatic Control, 29(12), 1097–1100.
Desai, U. B., Pal, D., & Kirkpatrick, R. D. (1985). A realization approach to stochastic model reduction. International Journal of Control, 42(4), 821–838.
Faurre, P. L. (1976). Stochastic realization algorithms. In R. Mehra, & D. Lainiotis (Eds.), System identification: Advances and case studies (pp. 1–25). New York: Academic Press.
Faurre, P., Clerget, M., & Germain, F. (1979). Opérateurs rationnels positifs. Paris: Dunod.
Gevers, M. (2006). A personal view on the development of system identification—a 30-year journey through an exciting field. IEEE Control Systems Magazine, 26(6), 93–105.
Katayama, T. (2005). Subspace methods for system identification. Berlin: Springer.
Lindquist, A., & Picci, G. (1996a). Canonical correlation analysis, approximate covariance extension, and identification of stationary time series. Automatica, 32(5), 709–733.
Lindquist, A., & Picci, G. (1996b). Geometric methods for state space identification. In S. Bittanti, & G. Picci (Eds.), Identification, adaptation, learning (pp. 1–69). Berlin: Springer.
Maciejowski, J. M. (1996). Parameter estimation of multivariable systems using balanced realizations. In S. Bittanti, & G. Picci (Eds.), Identification, adaptation, learning (pp. 70–119). Berlin: Springer.
Tanaka, H., & Katayama, T. (2003). Stochastic realization on a finite interval via "LQ decomposition" in Hilbert space. In Proceedings of the European control conference (ECC 2003).
Tanaka, H., & Katayama, T. (2004). A stochastically balanced realization on a finite-interval. In Proceedings of the 16th international symposium on mathematical theory of networks and systems (MTNS 2004).
Tanaka, H., & Katayama, T. (2005). Stochastic subspace identification guaranteeing stability and minimum phase. In Preprints of the 16th IFAC World congress (IFAC 2005).
Tanaka, H., & Katayama, T. (2006). A stochastic realization algorithm via block LQ decomposition in Hilbert space. Automatica, 42(5), 741–746.
Tanaka, H., ALMutawa, J., & Katayama, T. (2005). Stochastic subspace identification of linear systems with observation outliers. In Proceedings of the 44th IEEE conference on decision and control and the European control conference 2005 (CDC-ECC'05) (pp. 7090–7095).
van der Veen, A.-J., Deprettere, E. F., & Swindlehurst, A. L. (1993). Subspace-based signal analysis using singular value decomposition. Proceedings of the IEEE, 81(9), 1277–1308.


Van Overschee, P., & De Moor, B. (1993). Subspace algorithms for the stochastic identification problem. Automatica, 29(3), 649–660.
Van Overschee, P., & De Moor, B. (1996). Subspace identification for linear systems. Dordrecht: Kluwer Academic Publishers.

Hideyuki Tanaka received the B.Sc. and M.Sc. degrees in engineering from Kyoto University in 1993 and 1995, respectively, and the Ph.D. degree (engineering) from Kyoto University in 1999. He has been a member of the Department of Applied Mathematics and Physics, Kyoto University, since 1998. His research interests include system identification and robust control.

Tohru Katayama received the B.E., M.E., and Ph.D. degrees, all in Applied Mathematics and Physics, from Kyoto University, in 1964, 1966, and 1969, respectively. He was a Professor at the Department of Applied Mathematics and Physics, Kyoto University from 1986 to 2005, became an Emeritus Professor of Kyoto University in 2005, and is now a Professor at the Faculty of Culture and Information Science, Doshisha University. He had visiting positions at UCLA from 1974 to 1975, and at the University of Padova in 1997. He was the Chair of the IFAC Coordinating Committee of Systems and Signals from 2002 to 2005 and is now the Vice-Chair of the IFAC Technical Board (2005–). His research interests include estimation theory, stochastic realization, subspace methods of identification, blind identification, and control of industrial processes.