
Computers Math. Applic. Vol. 25, No. 12, pp. 55-72, 1993 0097-4943/93 $6.00 + 0.00 Printed in Great Britain. All rights reserved Copyright©1993 Pergamon Press Ltd

GENERALIZED MATRIX EUCLIDEAN ALGORITHMS FOR SOLVING DIOPHANTINE EQUATIONS AND ASSOCIATED PROBLEMS*

J. S. H. TSAI, S. S. CHEN Control System Laboratory, Department of Electrical Engineering National Cheng-Kung University, Tainan 70101, Taiwan, R.O.C.

L. S. SHIEH Department of Electrical Engineering

University of Houston, Central Campus, Houston, TX 77204-4793, U.S.A.

(Received and accepted July 1992)

Abstract--Based on the matrix fraction descriptions of multi-variable control systems, this paper presents the generalized matrix Euclidean algorithms for solving Diophantine equations and associated problems. A chain rule is developed for finding the greatest common right(left) divisor of two square(non-square) polynomial matrices and the irreducible right(left) matrix fraction description from a reducible or irreducible left(right) matrix fraction description. The development of multi-variable pole-assignment controllers is also discussed for engineering applications.

1. INTRODUCTION

In the modelling of practical multi-variable control systems, matrix fraction descriptions (MFDs) are often used to express the transfer functions of many physical systems such as aerodynamics and robotic manipulators. Analysis and design of multi-variable control systems via the MFDs can be found in many works [1-8]. In the analysis and synthesis of multi-variable control systems, the concept of co-primeness of the fractions of transfer-function matrices is equivalent to the concepts of controllability and observability in dynamic equations, and the control methodologies based on the MFDs of multi-variable control systems can be viewed as the natural generalization of the methodologies based upon the (scalar) transfer functions of single-variable control systems; hence, the development of efficient algorithms for the MFDs and associated problems is essential. One popular problem encountered in the analysis and design of multi-variable control systems is the solution of the Diophantine equations, in which the existence of solutions requires the co-primeness of associated pairs of polynomial matrices [6]. Applications of the canonical Diophantine equations were discussed by Wolovich and Antsaklis [9]. Fundamental relationships between appropriate pairs of polynomial matrices and the corresponding state-space realizations have been developed and utilized to solve general Diophantine equations and associated problems by Wolovich and Antsaklis [9]. When the dimension of the realized system matrix is large, their methods [9,10] require a great amount of computational effort; nevertheless, their work provides a nice link between state-space structures and the associated polynomial matrices.

A class of MFDs which contains monic or co-monic [11] matrix polynomials can be represented by various generalized matrix continued-fraction descriptions (MCFDs), such as the MCFDs in the first, second and third Cauer forms [12,13] and in the nested Cauer forms [14-16]. The MCFDs in various Cauer forms have successfully been applied to model

*This work was supported by the National Science Council of the Republic of China under Contract NSC-81-0404-E-006-516, the NASA-Johnson Space Center under Grant NAG-9-380, and the U.S. Army Research Office under Contract DAAL-03-91-G-0106.


simplification [12,17,18], identification [19], stability [15,16,20], design [17], realization [21] and canonical model transformations [22] of a class of multi-variable control systems. Computational algorithms called the right (left) matrix Routh algorithms (matrix Euclidean algorithms) [23] have been developed for the construction of the MCFDs, and applications have been made to determine the greatest common divisor [15,16,24] of two polynomial matrices, the transformation between right and left irreducible MFDs [24], and the inversion of non-square matrix polynomials [25]. Based upon the generalized matrix Euclidean algorithms utilized in the formulations and applications of the MCFDs, this paper develops a chain rule to solve the Diophantine equations, together with associated problems, and to determine multi-variable controllers described by the MFDs.

The paper is organized as follows. In Section 2, we review some terminology on matrix polynomials and polynomial matrices. The generalized matrix continued-fraction methods are reviewed and utilized as basic tools for the analysis. In Section 3, new methods for determining the greatest common right(left) divisors are proposed via the generalized matrix Euclidean algorithms utilized in the matrix continued-fraction methods. Algorithms for solving the Diophantine equations and for converting irreducible or reducible right(left) MFDs to irreducible left(right) MFDs are developed in Section 4. In Section 5, the case of non-square matrix polynomials for the generalized matrix continued-fraction methods is considered. Associated modified algorithms for the results in Sections 3 and 4 are also discussed. In Section 6, we utilize the established results to obtain multi-variable pole-assignment controllers. Moreover, illustrative examples are given in Section 7, and results are summarized in Section 8.

2. PRELIMINARIES

For completeness, we review some terminology on matrix polynomials and polynomial matrices in the following definitions.

DEFINITION 1. $A(s) = \sum_{i=0}^{p} A_{i+1}s^{i}$ is called a matrix polynomial [11] of degree $p$, where $A_{i} \in C^{m \times m}$ and $s$ is a complex variable. When $A_{p+1}$ is the identity matrix and a non-singular matrix, the matrix polynomial is said to be monic and regular [6], respectively. Also, when $A_{1}$ is the identity matrix and a non-singular matrix, the matrix polynomial is called co-monic [11] and co-regular, respectively. |

DEFINITION 2. $D(s) = \{d_{i,j}(s)\}$ is called a non-singular polynomial matrix when $\det[D(s)] \neq 0$, where $d_{i,j}(s)$ is a (scalar) polynomial in $s$ located at the $i$th row and $j$th column of $D(s)$. $D(s)$ is said to be row-reduced (column-reduced) [6] if the highest row-degree coefficient matrix (the highest column-degree coefficient matrix) [4] is non-singular. In addition, it is defined as co-row-reduced (co-column-reduced) if the lowest row-degree coefficient matrix (the lowest column-degree coefficient matrix) is non-singular. |

It has been shown [2,4,6,7] that, for a non-singular $m \times m$ polynomial matrix $\bar{A}(s)$, there exists a non-singular left polynomial matrix $P(s)$ (right polynomial matrix $Q(s)$) such that $\bar{A}(s) = P(s)A(s)$ ($\bar{A}(s) = A(s)Q(s)$), where $A(s)$ is either a row-reduced (column-reduced) matrix polynomial or a co-row-reduced (co-column-reduced) matrix polynomial. A computational algorithm for determining $P_i(s)$ from $\bar{A}_i(s)$ is available in the work of Krishnarao and Chen [26].

With the above definitions, the procedures of right and left matrix continued-fraction methods are presented, respectively.

ALGORITHM 2.1. THE GENERALIZED RIGHT MATRIX CONTINUED-FRACTION ALGORITHM.

STEP 1. Initiate with two polynomial matrices, $\bar{A}_1(s)$ and $\bar{A}_2(s)$. Choose left polynomial matrices $P_1(s)$ and $P_2(s)$ to transform $\bar{A}_1(s)$ and $\bar{A}_2(s)$ to co-row-reduced matrix polynomials $A_1(s)$ and $A_2(s)$, respectively. (If $\bar{A}_1(s)$ and $\bar{A}_2(s)$ are already co-row-reduced matrix polynomials, $P_1(s)$ and $P_2(s)$ can be identity matrices.) $A_i(s)$ is denoted as $A_{i,1} + A_{i,2}s + A_{i,3}s^2 + \cdots$, for $i = 1, 2$.

STEP 2. Determine

$H_i = A_{i,1}A_{i+1,1}^{-1}$,   (1a)

$\bar{A}_{i+2}(s) = A_i(s) - H_iA_{i+1}(s) = P_{i+2}(s)A_{i+2}(s)$,   (1b)


for $i = 1, 2, \ldots$, until $\bar{A}_w(s)$, where $\bar{A}_w(s)$ is either a singular or a null polynomial matrix. The left polynomial matrix $P_{i+2}(s)$ transforms a non-singular polynomial matrix $\bar{A}_{i+2}(s)$ to a co-row-reduced matrix polynomial $A_{i+2}(s)$, and $A_{i+2}(s)$ is denoted as $A_{i+2,1} + A_{i+2,2}s + A_{i+2,3}s^2 + \cdots$, for $i = 1, 2, \ldots, w-3$. |
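As a concrete illustration of Step 2, the following sketch implements one pass of the recursion (1a)-(1b) in the special matrix Routh case ($P_{i+2}(s) = sI$), assuming the lowest-degree coefficient matrices stay non-singular. A polynomial matrix is stored as a NumPy array whose $k$-th slice is the coefficient of $s^k$; the function name `right_routh_step` and this storage convention are illustrative choices, not notation from the paper.

```python
import numpy as np

def right_routh_step(A_i, A_ip1, tol=1e-9):
    """One step of recursion (1a)-(1b) in the matrix Routh case P_{i+2}(s) = s*I.

    A_i, A_ip1: arrays of shape (deg+1, m, m); A[k] holds the coefficient
    matrix of s**k, so A[0] is the lowest-degree coefficient A_{i,1}.
    Returns the matrix quotient H_i and the reduced remainder A_{i+2}(s).
    """
    H = A_i[0] @ np.linalg.inv(A_ip1[0])            # H_i = A_{i,1} A_{i+1,1}^{-1}
    deg = max(len(A_i), len(A_ip1))
    R = np.zeros((deg, *A_i.shape[1:]))
    R[:len(A_i)] += A_i
    R[:len(A_ip1)] -= np.einsum('ij,kjl->kil', H, A_ip1)   # A_i(s) - H_i A_{i+1}(s)
    assert np.allclose(R[0], 0, atol=tol)           # constant term cancels by construction
    A_next = R[1:] if len(R) > 1 else np.zeros((1, *R.shape[1:]))  # divide out s
    while len(A_next) > 1 and np.allclose(A_next[-1], 0, atol=tol):
        A_next = A_next[:-1]                        # trim zero leading coefficients
    return H, A_next
```

When the new lowest-degree coefficient matrix turns out singular, this simplified step no longer applies; a co-row-reducing $P_{i+2}(s)$, as in [26], would then be required, which the sketch does not cover.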

ALGORITHM 2.2. THE GENERALIZED LEFT MATRIX CONTINUED-FRACTION ALGORITHM.

STEP 1. Initiate with two polynomial matrices, $\bar{B}_1(s)$ and $\bar{B}_2(s)$. Choose right polynomial matrices $Q_1(s)$ and $Q_2(s)$ to transform $\bar{B}_1(s)$ and $\bar{B}_2(s)$ to co-column-reduced matrix polynomials $B_1(s)$ and $B_2(s)$, respectively. (If $\bar{B}_1(s)$ and $\bar{B}_2(s)$ are already co-column-reduced matrix polynomials, $Q_1(s)$ and $Q_2(s)$ can be identity matrices.) $B_i(s)$ is denoted as $B_{i,1} + B_{i,2}s + B_{i,3}s^2 + \cdots$, for $i = 1, 2$.

STEP 2. Determine

$M_i = B_{i+1,1}^{-1}B_{i,1}$,   (2a)

$\bar{B}_{i+2}(s) = B_i(s) - B_{i+1}(s)M_i = B_{i+2}(s)Q_{i+2}(s)$,   (2b)

for $i = 1, 2, \ldots$, until $\bar{B}_w(s)$, where $\bar{B}_w(s)$ is either a singular or a null polynomial matrix. The right polynomial matrix $Q_{i+2}(s)$ transforms a non-singular polynomial matrix $\bar{B}_{i+2}(s)$ to a co-column-reduced matrix polynomial $B_{i+2}(s)$, and $B_{i+2}(s)$ is denoted as $B_{i+2,1} + B_{i+2,2}s + B_{i+2,3}s^2 + \cdots$, for $i = 1, 2, \ldots, w-3$. |

Note that when $P_1(s) = P_2(s) = I_m$ ($Q_1(s) = Q_2(s) = I_m$) and $P_i(s) = sI_m$ ($Q_i(s) = sI_m$) for $i = 3, 4, \ldots, w-1$, the generalized right(left) matrix continued-fraction algorithm shown above reduces to the right(left) matrix Routh algorithm. In the next section, we use the right(left) matrix continued-fraction algorithm to determine the greatest common right(left) divisor of two polynomial matrices.

3. THE GREATEST COMMON RIGHT(LEFT) DIVISOR

Consider an m-input m-output right MFD,

$T(s) = \bar{N}_r(s)\bar{D}_r^{-1}(s)$,   (3)

where $\bar{N}_r(s)$ and $\bar{D}_r(s)$ are polynomial matrices.

3.1. The Procedures for Determining the Greatest Common Right Divisor of $\bar{N}_r(s)$ and $\bar{D}_r(s)$

STEP 1. Initiate with $\bar{A}_1(s) = \bar{D}_r(s)$ and $\bar{A}_2(s) = \bar{N}_r(s)$. Obtain co-row-reduced matrix polynomials $A_1(s)$ and $A_2(s)$ by non-singular left polynomial matrices $P_1(s)$ and $P_2(s)$, respectively, where $\bar{A}_1(s) = P_1(s)A_1(s)$, $A_1(s) = A_{1,1} + A_{1,2}s + A_{1,3}s^2 + \cdots$, $\bar{A}_2(s) = P_2(s)A_2(s)$ and $A_2(s) = A_{2,1} + A_{2,2}s + A_{2,3}s^2 + \cdots$.

STEP 2. Compute $H_i = A_{i,1}A_{i+1,1}^{-1}$, then determine $\bar{A}_{i+2}(s) = A_i(s) - H_iA_{i+1}(s)$ and $\bar{A}_{i+2}(s) = P_{i+2}(s)A_{i+2}(s)$ for $i = 1, 2, \ldots$, until $\bar{A}_w(s) = O_m$, where $A_{i+2}(s) = A_{i+2,1} + A_{i+2,2}s + A_{i+2,3}s^2 + \cdots$, for $i = 1, 2, \ldots, w-3$.

STEP 3. The greatest common right divisor $R(s)$ is equal to $A_{w-1}(s)$, i.e.,

$R(s) = A_{w-1}(s)$,   (4)

where $\bar{N}_r(s) = N_r(s)R(s)$, $\bar{D}_r(s) = D_r(s)R(s)$, and $\{N_r(s), D_r(s)\}$ is a right co-prime pair. If $R(s)$ is equal to a non-singular constant matrix, then $\bar{N}_r(s)$ and $\bar{D}_r(s)$ are already right co-prime and we can set $R(s) = I_m$ for convenience. |
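Building on the `right_routh_step` sketch above, Steps 1-3 can be strung together as below for the same simplified (matrix Routh) case; `gcrd_routh` and the scalar test data are hypothetical, and the routine simply raises an error where the full generalized algorithm would instead introduce a non-trivial $P_i(s)$.

```python
def gcrd_routh(D_bar, N_bar, tol=1e-9, max_iter=50):
    """Greatest common right divisor via the Routh-case recursion:
    iterate Step 2 starting from A_1 = D_bar, A_2 = N_bar and return
    the last non-null term A_{w-1}(s) as R(s) (Step 3)."""
    seq = [np.asarray(D_bar, float), np.asarray(N_bar, float)]
    for _ in range(max_iter):
        if np.allclose(seq[-1], 0, atol=tol):       # reached the null matrix A_w
            return seq[-2]
        if abs(np.linalg.det(seq[-1][0])) < tol:
            raise ValueError("singular lowest-degree coefficient: "
                             "a co-row-reducing P_i(s) would be needed here")
        _, A_next = right_routh_step(seq[-2], seq[-1], tol)
        seq.append(A_next)
    raise RuntimeError("recursion did not terminate")

# Scalar illustration (m = 1): D_bar = (s+1)(s+2), N_bar = (s+1); the GCRD is s+1.
D_bar = np.array([[[2.0]], [[3.0]], [[1.0]]])       # 2 + 3s + s^2
N_bar = np.array([[[1.0]], [[1.0]]])                # 1 + s
print(gcrd_routh(D_bar, N_bar))                     # coefficients of s + 1
```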

By utilizing the above result, we can have an irreducible right MFD, $T(s) = N_r(s)D_r^{-1}(s)$. Similarly, consider an m-input m-output left MFD,

$T(s) = \bar{D}_l^{-1}(s)\bar{N}_l(s)$.   (5)


3.2. The Procedures for Determining the Greatest Common Left Divisor of $\bar{N}_l(s)$ and $\bar{D}_l(s)$

STEP 1. Initiate with $\bar{B}_1(s) = \bar{D}_l(s)$ and $\bar{B}_2(s) = \bar{N}_l(s)$. Obtain co-column-reduced matrix polynomials $B_1(s)$ and $B_2(s)$ by non-singular right polynomial matrices $Q_1(s)$ and $Q_2(s)$, respectively, where $\bar{B}_1(s) = B_1(s)Q_1(s)$, $B_1(s) = B_{1,1} + B_{1,2}s + B_{1,3}s^2 + \cdots$, $\bar{B}_2(s) = B_2(s)Q_2(s)$ and $B_2(s) = B_{2,1} + B_{2,2}s + B_{2,3}s^2 + \cdots$.

STEP 2. Compute $M_i = B_{i+1,1}^{-1}B_{i,1}$, then determine $\bar{B}_{i+2}(s) = B_i(s) - B_{i+1}(s)M_i$ and $\bar{B}_{i+2}(s) = B_{i+2}(s)Q_{i+2}(s)$ for $i = 1, 2, \ldots$, until $\bar{B}_w(s) = O_m$, where $B_{i+2}(s) = B_{i+2,1} + B_{i+2,2}s + B_{i+2,3}s^2 + \cdots$, for $i = 1, 2, \ldots, w-3$.

STEP 3. The greatest common left divisor $L(s)$ is equal to $B_{w-1}(s)$, i.e.,

$L(s) = B_{w-1}(s)$,   (6)

where $\bar{N}_l(s) = L(s)N_l(s)$, $\bar{D}_l(s) = L(s)D_l(s)$, and $\{N_l(s), D_l(s)\}$ is a left co-prime pair. If $L(s)$ is equal to a non-singular constant matrix, then $\bar{N}_l(s)$ and $\bar{D}_l(s)$ are already left co-prime and we can set $L(s) = I_m$ for convenience. |

By utilizing the above algorithm, it is possible to have an irreducible left MFD, $T(s) = D_l^{-1}(s)N_l(s)$. The most important difference between the above algorithms and those based on the matrix Routh algorithms is that the former handle the general case by multiplying singular lowest-degree coefficient matrices by a non-singular polynomial matrix $P_j(s)$ or $Q_j(s)$. The proofs are almost the same as those of the established algorithms via the matrix Routh algorithm [15,16,24], except for the introduction of $P_j(s)$ or $Q_j(s)$. Note that for regular cases the matrix Routh algorithm needs $2m$ matrix quotients, and so do the generalized matrix continued-fraction methods. For the case of singular lowest-degree coefficient matrices, the former cannot be performed, while the latter requires fewer than $2m$ matrix quotients.
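Since transposing the left recursion (2a)-(2b) turns it into the right recursion applied to the transposed coefficient matrices, the left-divisor computation can reuse the right-divisor sketch above. This transposition shortcut is an implementation convenience assumed here, not a step stated in the paper.

```python
def gcld_routh(D_bar, N_bar, tol=1e-9):
    """Greatest common left divisor in the Routh case, obtained by running the
    right-divisor routine on transposed coefficients: transposing
    B_{i+2} = B_i - B_{i+1} M_i gives the right-type recursion with quotient M_i^T."""
    Dt = np.transpose(np.asarray(D_bar, float), (0, 2, 1))  # transpose each coefficient
    Nt = np.transpose(np.asarray(N_bar, float), (0, 2, 1))
    Rt = gcrd_routh(Dt, Nt, tol)                            # GCRD of the transposes
    return np.transpose(Rt, (0, 2, 1))                      # L(s) is its transpose
```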

4. THE DIOPHANTINE EQUATIONS AND CONVERSIONS OF RIGHT(LEFT) MFD TO LEFT(RIGHT) MFD

Consider the right Diophantine equation,

$X(s)N_r(s) + Y(s)D_r(s) = I_m$,   (7)

where $D_r(s), N_r(s) \in R[s]^{m \times m}$ are both given polynomial matrices, and $I_m$ denotes an $m \times m$ identity matrix. $N_r(s)$ and $D_r(s)$ are assumed to be right co-prime. The co-primeness of $N_r(s)$ and $D_r(s)$ guarantees the existence of solutions of the right Diophantine equation [4,6]. The Diophantine equations play an important role in system design and analysis in the frequency domain [3,5,27].
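A quick way to sanity-check any candidate pair $(X(s), Y(s))$ produced by the procedures below is to evaluate both sides of (7) at a few sample points. The helper below is an illustrative test harness using the coefficient-array convention of the earlier sketches; the function names and the sample points are arbitrary choices, not part of the paper's algorithm.

```python
def poly_mat_eval(coeffs, s):
    """Evaluate a polynomial matrix stored as (deg+1, m, n) coefficient slices."""
    coeffs = np.asarray(coeffs)
    val = np.zeros(coeffs.shape[1:], dtype=complex)
    for k in range(len(coeffs) - 1, -1, -1):      # Horner's rule in s
        val = val * s + coeffs[k]
    return val

def diophantine_residual(X, Y, Nr, Dr, points=(0.7, 1.3 + 0.4j, -2.1)):
    """Largest deviation of X(s)N_r(s) + Y(s)D_r(s) from the identity at sample points."""
    m = np.asarray(Nr).shape[1]
    return max(np.abs(poly_mat_eval(X, s) @ poly_mat_eval(Nr, s)
                      + poly_mat_eval(Y, s) @ poly_mat_eval(Dr, s)
                      - np.eye(m)).max() for s in points)
```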

Applying the generalized right matrix Euclidean algorithm initiated with $N_r(s)$ and $D_r(s)$, one has the following illustrative process:

$\bar{A}_1(s) = D_r(s) = P_1(s)A_1(s)$,   (8a)

$\bar{A}_2(s) = N_r(s) = P_2(s)A_2(s)$,   (8b)

$\bar{A}_3(s) = A_1(s) - H_1A_2(s) = P_3(s)A_3(s)$,   (8c)

$\bar{A}_4(s) = A_2(s) - H_2A_3(s) = P_4(s)A_4(s)$,   (8d)

$\bar{A}_5(s) = A_3(s) - H_3A_4(s) = P_5(s)A_5(s)$,   (8e)

$\bar{A}_6(s) = A_4(s) - H_4A_5(s) = P_6(s)A_6(s)$,   (8f)

$\bar{A}_7(s) = A_5(s) - H_5A_6(s) = P_7(s)A_7(s)$,   (8g)

$\bar{A}_8(s) = A_6(s) - H_6A_7(s) = O_m$.   (8h)

Since $N_r(s)$ and $D_r(s)$ are right co-prime, $\bar{A}_7(s)$ must be a non-singular constant matrix; hence the second-to-last term $\bar{A}_7(s)$ is invertible. From this fact, equation (8g) can be rewritten in terms of $A_5(s)$ and $A_6(s)$ as

$\bar{A}_7^{-1}(s)A_5(s) - \bar{A}_7^{-1}(s)H_5A_6(s) = I_m$.   (9)


We may notice that equation (9) has the form of equation (7). If we can express $\{A_5(s), A_6(s)\}$ in terms of $\{\bar{A}_1(s), \bar{A}_2(s)\}$, then one can recover the right co-prime pair $\{D_r(s), N_r(s)\}$.

From equations (8a)-(8c), one has

$\bar{A}_3(s) = A_1(s) - H_1A_2(s) = P_1^{-1}(s)[P_1(s)A_1(s)] - H_1P_2^{-1}(s)[P_2(s)A_2(s)] = P_1^{-1}(s)\bar{A}_1(s) - H_1P_2^{-1}(s)\bar{A}_2(s)$.   (10)

From equations (8b)-(8d), one has

$\bar{A}_4(s) = P_2^{-1}(s)\bar{A}_2(s) - H_2P_3^{-1}(s)\bar{A}_3(s)$.   (11)

Rearranging equations (10) and (11), we can have

$\begin{bmatrix}\bar{A}_3(s)\\ \bar{A}_4(s)\end{bmatrix} = \begin{bmatrix} P_1^{-1}(s) & -H_1P_2^{-1}(s)\\ -H_2P_3^{-1}(s)P_1^{-1}(s) & P_2^{-1}(s)+H_2P_3^{-1}(s)H_1P_2^{-1}(s)\end{bmatrix}\begin{bmatrix}\bar{A}_1(s)\\ \bar{A}_2(s)\end{bmatrix}$   (12)

$\phantom{\begin{bmatrix}\bar{A}_3(s)\\ \bar{A}_4(s)\end{bmatrix}} = \Lambda_1(s)\begin{bmatrix}\bar{A}_1(s)\\ \bar{A}_2(s)\end{bmatrix}$.   (13)

In the similar way, we have

$\begin{bmatrix}\bar{A}_5(s)\\ \bar{A}_6(s)\end{bmatrix} = \begin{bmatrix} P_3^{-1}(s) & -H_3P_4^{-1}(s)\\ -H_4P_5^{-1}(s)P_3^{-1}(s) & P_4^{-1}(s)+H_4P_5^{-1}(s)H_3P_4^{-1}(s)\end{bmatrix}\begin{bmatrix}\bar{A}_3(s)\\ \bar{A}_4(s)\end{bmatrix} = \Lambda_2(s)\begin{bmatrix}\bar{A}_3(s)\\ \bar{A}_4(s)\end{bmatrix}$.   (14)

By equations (13) and (14), we get the chain form

$\begin{bmatrix}\bar{A}_5(s)\\ \bar{A}_6(s)\end{bmatrix} = \Lambda_2(s)\Lambda_1(s)\begin{bmatrix}\bar{A}_1(s)\\ \bar{A}_2(s)\end{bmatrix} = \Lambda(s)\begin{bmatrix}\bar{A}_1(s)\\ \bar{A}_2(s)\end{bmatrix}$.   (15)

From equation (15), we have

$\begin{bmatrix}\bar{A}_5(s)\\ \bar{A}_6(s)\end{bmatrix} = \begin{bmatrix}\Lambda_{11}(s) & \Lambda_{12}(s)\\ \Lambda_{21}(s) & \Lambda_{22}(s)\end{bmatrix}\begin{bmatrix}\bar{A}_1(s)\\ \bar{A}_2(s)\end{bmatrix}$,

$\bar{A}_5(s) = \Lambda_{11}(s)\bar{A}_1(s) + \Lambda_{12}(s)\bar{A}_2(s)$,   (16a)

$\bar{A}_6(s) = \Lambda_{21}(s)\bar{A}_1(s) + \Lambda_{22}(s)\bar{A}_2(s)$.   (16b)

Going back to equation (9), coupled with equations (8f), (8g), (16a) and (16b), we can derive

$\bar{A}_7^{-1}(s)A_5(s) - \bar{A}_7^{-1}(s)H_5A_6(s) = I_m$,

$\Rightarrow \bar{A}_7^{-1}(s)P_5^{-1}(s)\bar{A}_5(s) - \bar{A}_7^{-1}(s)H_5P_6^{-1}(s)\bar{A}_6(s) = I_m$,

$\Rightarrow \bar{A}_7^{-1}(s)P_5^{-1}(s)[\Lambda_{11}(s)\bar{A}_1(s) + \Lambda_{12}(s)\bar{A}_2(s)] - \bar{A}_7^{-1}(s)H_5P_6^{-1}(s)[\Lambda_{21}(s)\bar{A}_1(s) + \Lambda_{22}(s)\bar{A}_2(s)] = I_m$,

$\Rightarrow [\bar{A}_7^{-1}(s)P_5^{-1}(s)\Lambda_{11}(s) - \bar{A}_7^{-1}(s)H_5P_6^{-1}(s)\Lambda_{21}(s)]\bar{A}_1(s) + [\bar{A}_7^{-1}(s)P_5^{-1}(s)\Lambda_{12}(s) - \bar{A}_7^{-1}(s)H_5P_6^{-1}(s)\Lambda_{22}(s)]\bar{A}_2(s) = I_m$.   (17)


Comparing equation (17) with equation (7) yields the solution of equation (7) as

$Y(s) = K_1(s)\Lambda_{11}(s) + K_2(s)\Lambda_{21}(s)$,   (18a)

$X(s) = K_1(s)\Lambda_{12}(s) + K_2(s)\Lambda_{22}(s)$,   (18b)

where $K_1(s) = \bar{A}_7^{-1}(s)P_5^{-1}(s)$ and $K_2(s) = -\bar{A}_7^{-1}(s)H_5P_6^{-1}(s)$. Next, considering the conversion of the right MFD to the left MFD, one has the basic relation,

$T(s) = N_r(s)D_r^{-1}(s) = D_l^{-1}(s)N_l(s)$,   (19a)

or

$N_l(s)D_r(s) - D_l(s)N_r(s) = O_m$.   (19b)

To determine the irreducible polynomial matrices $N_l(s)$ and $D_l(s)$ from $D_r(s)$ and $N_r(s)$, we combine equation (8g) and equation (8h) to obtain

$A_6(s) - H_6P_7^{-1}(s)[A_5(s) - H_5A_6(s)] = O_m$,

$\Rightarrow -H_6P_7^{-1}(s)A_5(s) + [I_m + H_6P_7^{-1}(s)H_5]A_6(s) = O_m$.   (20)

Utilizing equation (16a) and equation (16b), we can have

$-H_6P_7^{-1}(s)P_5^{-1}(s)[\Lambda_{11}(s)\bar{A}_1(s) + \Lambda_{12}(s)\bar{A}_2(s)] + (I_m + H_6P_7^{-1}(s)H_5)P_6^{-1}(s)[\Lambda_{21}(s)\bar{A}_1(s) + \Lambda_{22}(s)\bar{A}_2(s)] = O_m$,

$\Rightarrow [-H_6P_7^{-1}(s)P_5^{-1}(s)\Lambda_{11}(s) + (I_m + H_6P_7^{-1}(s)H_5)P_6^{-1}(s)\Lambda_{21}(s)]\bar{A}_1(s) + [-H_6P_7^{-1}(s)P_5^{-1}(s)\Lambda_{12}(s) + (I_m + H_6P_7^{-1}(s)H_5)P_6^{-1}(s)\Lambda_{22}(s)]\bar{A}_2(s) = O_m$.   (21)

Comparing equation (21) to equation (19b) gives

$N_l(s) = K_3(s)\Lambda_{11}(s) + K_4(s)\Lambda_{21}(s)$,   (22a)

$D_l(s) = -K_3(s)\Lambda_{12}(s) - K_4(s)\Lambda_{22}(s)$,   (22b)

where $K_3(s) = -H_6P_7^{-1}(s)P_5^{-1}(s)$ and $K_4(s) = [I_m + H_6P_7^{-1}(s)H_5]P_6^{-1}(s)$. In the above case, we have successfully obtained one solution of the right Diophantine equation and an irreducible left MFD from an irreducible right MFD. For general cases, the procedures are stated as follows.

4.1. The Procedures for Determining One Solution of the Right Diophantine Equation, $X(s)N_r(s) + Y(s)D_r(s) = I_m$

STEP 1. Initiate with $\bar{A}_1(s) = D_r(s)$ and $\bar{A}_2(s) = N_r(s)$. Obtain co-row-reduced matrix polynomials $A_1(s)$ and $A_2(s)$ and non-singular left polynomial matrices $P_1(s)$ and $P_2(s)$, respectively, where $\bar{A}_1(s) = P_1(s)A_1(s)$, $A_1(s) = A_{1,1} + A_{1,2}s + A_{1,3}s^2 + \cdots$, $\bar{A}_2(s) = P_2(s)A_2(s)$ and $A_2(s) = A_{2,1} + A_{2,2}s + A_{2,3}s^2 + \cdots$.

STEP 2. Compute $H_i = A_{i,1}A_{i+1,1}^{-1}$, then determine $\bar{A}_{i+2}(s) = A_i(s) - H_iA_{i+1}(s)$ and $\bar{A}_{i+2}(s) = P_{i+2}(s)A_{i+2}(s)$ for $i = 1, 2, \ldots$, until $\bar{A}_w = O_m$, where $A_{i+2}(s) = A_{i+2,1} + A_{i+2,2}s + A_{i+2,3}s^2 + \cdots$ for $i = 1, 2, \ldots, w-3$.

STEP 3. Calculate

$\Lambda_j(s) = \begin{bmatrix} P_{2j-1}^{-1}(s) & -H_{2j-1}P_{2j}^{-1}(s) \\ -H_{2j}P_{2j+1}^{-1}(s)P_{2j-1}^{-1}(s) & P_{2j}^{-1}(s) + H_{2j}P_{2j+1}^{-1}(s)H_{2j-1}P_{2j}^{-1}(s)\end{bmatrix}$,   (23)

for $j = 1, 2, \ldots, (w/2) - 2$ if $w$ is even and $w > 4$; for $j = 1, 2, \ldots, ((w-1)/2) - 2$ if $w$ is odd and $w > 3$. Then, compute

$\Lambda(s) = \begin{bmatrix}\Lambda_{11}(s) & \Lambda_{12}(s)\\ \Lambda_{21}(s) & \Lambda_{22}(s)\end{bmatrix} = \begin{cases}\prod_{j=1}^{(w/2)-2}\Lambda_{(w/2-1)-j}(s), & \text{for an even } w \text{ and } w > 4,\\ \prod_{j=1}^{((w-1)/2)-2}\Lambda_{((w-1)/2-1)-j}(s), & \text{for an odd } w \text{ and } w > 3.\end{cases}$   (24)


For $w = 4$ or $3$, we force $\Lambda(s)$ to be an identity matrix, that is to say, $\Lambda_{11}(s) = \Lambda_{22}(s) = I_m$ and $\Lambda_{12}(s) = \Lambda_{21}(s) = O_m$. For $w = 3$, one has $D_r(s) - H_1N_r(s) = O_m$; then $Y(s) = I_m$ and $X(s) = H_1$. Note that $w$ is a positive integer greater than two.

STEP 4. Determine

$Y(s) = K_1(s)\Lambda_{11}(s) + K_2(s)\Lambda_{21}(s)$,   (25a)

$X(s) = K_1(s)\Lambda_{12}(s) + K_2(s)\Lambda_{22}(s)$,   (25b)

where

$K_1(s) = \bar{A}_{w-1}^{-1}(s)P_{w-3}^{-1}(s)$,  $K_2(s) = -\bar{A}_{w-1}^{-1}(s)H_{w-3}P_{w-2}^{-1}(s)$,   (26)

for an even $w$, and

$K_1(s) = -\bar{A}_{w-1}^{-1}(s)H_{w-3}P_{w-2}^{-1}(s)P_{w-4}^{-1}(s)$,   (27a)

$K_2(s) = \bar{A}_{w-1}^{-1}(s)[I_m + H_{w-3}P_{w-2}^{-1}(s)H_{w-4}]P_{w-3}^{-1}(s)$,   (27b)

for an odd $w$. |
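The chain product in equation (24) is an ordered multiplication of $2m \times 2m$ block matrices; evaluated at a fixed complex point $s_0$ it reduces to plain matrix products, as the small helper below illustrates. The function name and argument convention (factors $\Lambda_1(s_0), \Lambda_2(s_0), \ldots$ supplied as numeric arrays in increasing index order) are illustrative assumptions, and the sketch works with point evaluations only, whereas the paper's $\Lambda_j$ are rational matrices in $s$.

```python
def chain_blocks(Lambda_factors_at_s0, m):
    """Form Lambda(s0) = Lambda_K(s0) ... Lambda_2(s0) Lambda_1(s0) as in equation (24)
    and return its four m x m blocks."""
    total = np.eye(2 * m)
    for Lj in Lambda_factors_at_s0:      # left-multiply so the highest index ends up leftmost
        total = Lj @ total
    return (total[:m, :m], total[:m, m:],    # Lambda_11, Lambda_12
            total[m:, :m], total[m:, m:])    # Lambda_21, Lambda_22
```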

4.2. Conversion of an Irreducible Right MFD to an Irreducible Left MFD; $N_r(s)D_r^{-1}(s) \rightarrow D_l^{-1}(s)N_l(s)$

STEP 1 and STEP 2 are the same as those of Section 4.1.

STEP 3. Calculate $\hat{\Lambda}_j(s)$ = the right-hand side of equation (23) for $j = 1, 2, \ldots, (w/2) - 2$ if $w$ is even and $w > 4$; $j = 1, 2, \ldots, ((w-1)/2) - 1$ if $w$ is odd and $w > 3$ (note that the part for an odd $w$ is different from that of Section 4.1; hence, we use the "$\hat{\ }$" symbol). Therefore, compute

$\hat{\Lambda}(s) = \begin{bmatrix}\hat{\Lambda}_{11}(s) & \hat{\Lambda}_{12}(s)\\ \hat{\Lambda}_{21}(s) & \hat{\Lambda}_{22}(s)\end{bmatrix} = \begin{cases}\prod_{j=1}^{(w/2)-2}\hat{\Lambda}_{(w/2-1)-j}(s), & \text{for an even } w \text{ and } w > 4,\\ \prod_{j=1}^{((w-1)/2)-1}\hat{\Lambda}_{((w-1)/2)-j}(s), & \text{for an odd } w \text{ and } w > 3.\end{cases}$   (28)

For $w = 4$ or $3$, we force $\hat{\Lambda}(s)$ to be an identity matrix, that is to say, $\hat{\Lambda}_{11}(s) = \hat{\Lambda}_{22}(s) = I_m$ and $\hat{\Lambda}_{12}(s) = \hat{\Lambda}_{21}(s) = O_m$.

STEP 4. Obtain

$N_l(s) = K_3(s)\hat{\Lambda}_{11}(s) + K_4(s)\hat{\Lambda}_{21}(s)$,   (29a)

$D_l(s) = -K_3(s)\hat{\Lambda}_{12}(s) - K_4(s)\hat{\Lambda}_{22}(s)$,   (29b)

where

$K_3(s) = -H_{w-2}P_{w-1}^{-1}(s)P_{w-3}^{-1}(s)$,   (30a)

$K_4(s) = [I_m + H_{w-2}P_{w-1}^{-1}(s)H_{w-3}]P_{w-2}^{-1}(s)$,   (30b)

for an even $w$, and

$K_3(s) = P_{w-2}^{-1}(s)$,  $K_4(s) = -H_{w-2}P_{w-1}^{-1}(s)$,   (31)

for an odd $w$. |

For a reducible right MFD, we can use Section 3.1 to obtain the corresponding irreducible right MFD, then perform the above procedures to obtain an irreducible left MFD.
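After converting $N_r(s)D_r^{-1}(s)$ into $D_l^{-1}(s)N_l(s)$, relation (19b) offers a cheap numerical consistency check at sample points. The helper below reuses `poly_mat_eval` from the earlier sketch and the same hypothetical coefficient-array convention; it is a test aid, not part of the paper's procedure.

```python
def mfd_conversion_residual(Nl, Dl, Nr, Dr, points=(0.5, 2.0, 1.1 - 0.7j)):
    """Largest deviation of N_l(s)D_r(s) - D_l(s)N_r(s) from zero (equation (19b))."""
    return max(np.abs(poly_mat_eval(Nl, s) @ poly_mat_eval(Dr, s)
                      - poly_mat_eval(Dl, s) @ poly_mat_eval(Nr, s)).max() for s in points)
```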

In a similar manner, we can treat the left Diophantine equation and the left MFD via the generalized left matrix Euclidean algorithm. We directly state the procedures for general cases as follows.


4.3. The Left Diophantine Equation, $N_l(s)X(s) + D_l(s)Y(s) = I_m$

STEP 1. Initiate with $\bar{B}_1(s) = D_l(s)$ and $\bar{B}_2(s) = N_l(s)$. Obtain co-column-reduced matrix polynomials $B_1(s)$ and $B_2(s)$ and non-singular right polynomial matrices $Q_1(s)$ and $Q_2(s)$, respectively, where $\bar{B}_1(s) = B_1(s)Q_1(s)$, $B_1(s) = B_{1,1} + B_{1,2}s + B_{1,3}s^2 + \cdots$, $\bar{B}_2(s) = B_2(s)Q_2(s)$ and $B_2(s) = B_{2,1} + B_{2,2}s + B_{2,3}s^2 + \cdots$.

STEP 2. Compute $M_i = B_{i+1,1}^{-1}B_{i,1}$, then determine $\bar{B}_{i+2}(s) = B_i(s) - B_{i+1}(s)M_i$ and $\bar{B}_{i+2}(s) = B_{i+2}(s)Q_{i+2}(s)$ for $i = 1, 2, \ldots$, until $\bar{B}_w = O_m$, where $B_{i+2}(s) = B_{i+2,1} + B_{i+2,2}s + B_{i+2,3}s^2 + \cdots$ for $i = 1, 2, \ldots, w-3$.

STEP 3. Calculate

$\Lambda_j(s) = \begin{bmatrix} Q_{2j-1}^{-1}(s) & -Q_{2j-1}^{-1}(s)Q_{2j+1}^{-1}(s)M_{2j} \\ -Q_{2j}^{-1}(s)M_{2j-1} & Q_{2j}^{-1}(s) + Q_{2j}^{-1}(s)M_{2j-1}Q_{2j+1}^{-1}(s)M_{2j}\end{bmatrix}$,   (32)

for $j = 1, 2, \ldots, (w/2)-2$ if $w$ is even and $w > 4$; for $j = 1, 2, \ldots, ((w-1)/2)-2$ if $w$ is odd and $w > 3$, then compute

$\Lambda(s) = \begin{bmatrix}\Lambda_{11}(s) & \Lambda_{12}(s)\\ \Lambda_{21}(s) & \Lambda_{22}(s)\end{bmatrix} = \begin{cases}\prod_{j=1}^{(w/2)-2}\Lambda_j(s), & \text{for an even } w \text{ and } w > 4,\\ \prod_{j=1}^{((w-1)/2)-2}\Lambda_j(s), & \text{for an odd } w \text{ and } w > 3.\end{cases}$   (33)

For $w = 4$ or $3$, we force $\Lambda(s)$ to be an identity matrix, that is to say, $\Lambda_{11}(s) = \Lambda_{22}(s) = I_m$ and $\Lambda_{12}(s) = \Lambda_{21}(s) = O_m$.

STEP 4. Obtain

$Y(s) = \Lambda_{11}(s)J_1(s) + \Lambda_{12}(s)J_2(s)$,   (34a)

$X(s) = \Lambda_{21}(s)J_1(s) + \Lambda_{22}(s)J_2(s)$,   (34b)

where

$J_1(s) = Q_{w-3}^{-1}(s)\bar{B}_{w-1}^{-1}(s)$,  $J_2(s) = -Q_{w-2}^{-1}(s)M_{w-3}\bar{B}_{w-1}^{-1}(s)$,   (35)

for an even $w$, and

$J_1(s) = -Q_{w-4}^{-1}(s)Q_{w-2}^{-1}(s)M_{w-3}\bar{B}_{w-1}^{-1}(s)$,   (36a)

$J_2(s) = Q_{w-3}^{-1}(s)[I_m + M_{w-4}Q_{w-2}^{-1}(s)M_{w-3}]\bar{B}_{w-1}^{-1}(s)$,   (36b)

for an odd $w$. |

4.4. Conversion of an Irreducible Left MFD to an Irreducible Right MFD; $D_l^{-1}(s)N_l(s) \rightarrow N_r(s)D_r^{-1}(s)$

STEP 1 and STEP 2 are the same as those of Section 4.3.

STEP 3. Calculate $\hat{\Lambda}_j(s)$ = the right-hand side of equation (32) for $j = 1, 2, \ldots, (w/2)-2$ if $w$ is even and $w > 4$; $j = 1, 2, \ldots, ((w-1)/2)-1$ if $w$ is odd and $w > 3$, then compute

$\hat{\Lambda}(s) = \begin{bmatrix}\hat{\Lambda}_{11}(s) & \hat{\Lambda}_{12}(s)\\ \hat{\Lambda}_{21}(s) & \hat{\Lambda}_{22}(s)\end{bmatrix} = \begin{cases}\prod_{j=1}^{(w/2)-2}\hat{\Lambda}_j(s), & \text{for an even } w \text{ and } w > 4,\\ \prod_{j=1}^{((w-1)/2)-1}\hat{\Lambda}_j(s), & \text{for an odd } w \text{ and } w > 3.\end{cases}$   (37)

For $w = 4$ or $3$, we force $\hat{\Lambda}(s)$ to be an identity matrix.

STEP 4. Obtain

$N_r(s) = \hat{\Lambda}_{11}(s)J_3(s) + \hat{\Lambda}_{12}(s)J_4(s)$,   (38a)

$D_r(s) = -\hat{\Lambda}_{21}(s)J_3(s) - \hat{\Lambda}_{22}(s)J_4(s)$,   (38b)

where

$J_3(s) = -Q_{w-3}^{-1}(s)Q_{w-1}^{-1}(s)M_{w-2}$,   (39a)

$J_4(s) = Q_{w-2}^{-1}(s)[I_m + M_{w-3}Q_{w-1}^{-1}(s)M_{w-2}]$,   (39b)

for an even $w$, and

$J_3(s) = Q_{w-2}^{-1}(s)$,  $J_4(s) = -Q_{w-1}^{-1}(s)M_{w-2}$,   (40)

for an odd $w$. |

For a reducible left MFD, we can apply Section 3.2 to obtain the corresponding irreducible left MFD, then perform the above procedures to obtain an irreducible right MFD.

Note that the solution of the Diophantine equation is not unique. For the right Diophantine equation, one may readily obtain two polynomial matrices $X(s)$ and $Y(s)$ satisfying $X(s)N_r(s) + Y(s)D_r(s) = I_m$, and another two polynomial matrices $D_l(s)$ and $N_l(s)$ satisfying $N_l(s)D_r(s) - D_l(s)N_r(s) = O_m$, by the above procedures. Let $\hat{X}(s) = X(s) + W(s)D_l(s)$ and $\hat{Y}(s) = Y(s) - W(s)N_l(s)$, where $W(s)$ is an arbitrary polynomial matrix. It is easy to show that $\hat{X}(s)$ and $\hat{Y}(s)$ constitute the global solution of the right Diophantine equation with the parameter $W(s)$. One can obtain the global solution for the left Diophantine equation in a similar way.
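With the sign convention adopted above ($\hat{X} = X + WD_l$, $\hat{Y} = Y - WN_l$), the whole family of solutions can be generated by one polynomial-matrix multiplication and addition; `poly_mat_mul` and `poly_mat_add` below are small illustrative helpers over the same coefficient-array layout, and any member of the family can be re-checked numerically with `diophantine_residual` from Section 4.

```python
def poly_mat_mul(A, B):
    """Product of two polynomial matrices in coefficient-slice form."""
    A, B = np.asarray(A), np.asarray(B)
    out = np.zeros((len(A) + len(B) - 1, A.shape[1], B.shape[2]))
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            out[i + j] += Ai @ Bj
    return out

def poly_mat_add(A, B):
    """Sum of two polynomial matrices of possibly different degrees."""
    A, B = np.asarray(A), np.asarray(B)
    out = np.zeros((max(len(A), len(B)), *A.shape[1:]))
    out[:len(A)] += A
    out[:len(B)] += B
    return out

def parameterized_solution(X, Y, Dl, Nl, W):
    """Return (X + W*Dl, Y - W*Nl), one member of the global solution family."""
    return poly_mat_add(X, poly_mat_mul(W, Dl)), poly_mat_add(Y, -poly_mat_mul(W, Nl))
```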

The proofs for Sections 4.1-4.4 are evident from the illustrative case detailed at the beginning of this section. Illustrative numerical examples are given in Section 7.

5. MODIFIED PROCEDURES FOR NON-SQUARE CASES

Consider the following matrix transfer function $T(s) \in R[s]^{n_o \times n_i}$, which is a product of a polynomial matrix $\bar{A}_2(s) \in R[s]^{n_o \times n_i}$ and the inverse of another polynomial matrix $\bar{A}_1(s) \in R[s]^{q \times q}$, where $q = \min(n_o, n_i)$,

$T(s) = \bar{A}_2(s)\bar{A}_1^{-1}(s) = [A_{2,1} + A_{2,2}s + A_{2,3}s^2 + \cdots] \times [A_{1,1} + A_{1,2}s + A_{1,3}s^2 + \cdots]^{-1}$,   (41a)

or

$T(s) = \bar{A}_1^{-1}(s)\bar{A}_2(s) = [A_{1,1} + A_{1,2}s + A_{1,3}s^2 + \cdots]^{-1} \times [A_{2,1} + A_{2,2}s + A_{2,3}s^2 + \cdots]$.   (41b)

For the case $n_i = n_o$, the matrix Routh algorithm [22] and its reverse process [27] have been studied in detail. For more general usage, the matrix Routh algorithm has been extended to the non-square case ($n_o \neq n_i$).

The procedures in Sections 3 and 4 for the square case are also based on the generalized matrix continued-fraction algorithms. We have mentioned that the matrix Euclidean algorithms are the general form of the matrix Routh algorithms. Following the modified procedures of the matrix Routh algorithms for the non-square case, we discuss modified procedures of the matrix Euclidean algorithms for the non-square case through the following case studies.

5.1. $T(s) = \bar{A}_2(s)\bar{A}_1^{-1}(s)$, where $\bar{A}_1(s) = P_1(s)A_1(s)$ and $\bar{A}_2(s) = P_2(s)A_2(s)$

CASE 1. $n_o \geq n_i$, $T(s) \in R[s]^{n_o \times n_i}$, $\bar{A}_1(s) \in R[s]^{n_i \times n_i}$, $\bar{A}_2(s) \in R[s]^{n_o \times n_i}$,

$T(s) = \bar{A}_2(s)\bar{A}_1^{-1}(s) = \bar{N}_r(s)\bar{D}_r^{-1}(s) = N_r(s)R(s)[D_r(s)R(s)]^{-1} = N_r(s)D_r^{-1}(s)$.   (42)

The $R(s)$ is the common right divisor of $\bar{N}_r(s)$ and $\bar{D}_r(s)$. The left polynomial matrix $P_i(s)$ transforms a non-singular polynomial matrix $\bar{A}_i(s)$ to a co-row-reduced matrix polynomial $A_i(s)$.

The generalized matrix Euclidean algorithm is

$H_i = A_{i,1}A_{i+1,1}^{-1}$,  rank$(A_{i,1}) = n_i$,

$\bar{A}_{i+2}(s) = A_i(s) - H_iA_{i+1}(s) = P_{i+2}(s)A_{i+2}(s)$,  $i = 1, 2, \ldots, w-1$,   (43)

until $\bar{A}_w$ = a null matrix or a singular one.


The constant matrices $H_i$ with appropriate size are called the matrix quotients. If $n_o > n_i$, $A_{i+1,1}^{-1}$ := the pseudo-inverse of $A_{i+1,1}$ = $(A_{i+1,1}^TA_{i+1,1})^{-1}A_{i+1,1}^T$ is the left inverse of $A_{i+1,1}$.
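For the non-square recursion, the ordinary inverse in the matrix quotient is replaced by the one-sided pseudo-inverse quoted above; the snippet below simply checks the left-inverse property numerically for an arbitrary full-column-rank constant coefficient matrix (the example matrix is an assumption made for illustration).

```python
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])                      # n_o x n_i with n_o > n_i, full column rank
A_left_inv = np.linalg.inv(A.T @ A) @ A.T       # (A^T A)^{-1} A^T, the left inverse
print(np.allclose(A_left_inv @ A, np.eye(2)))   # True: A^+ A = I_{n_i}
```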

CASE 2. $n_o \leq n_i$, $T(s) \in R[s]^{n_o \times n_i}$, $\bar{A}_1(s) \in R[s]^{n_i \times n_o}$, $\bar{A}_2(s) \in R[s]^{n_o \times n_o}$,

$T(s) = \bar{A}_2(s)\bar{A}_1^{-1}(s) = D_l^{-1}(s)R(s)[N_l^{-1}(s)R(s)]^{-1} = D_l^{-1}(s)N_l(s)$.   (44)

The $R(s)$ is the common right divisor of $\bar{A}_1(s)$ and $\bar{A}_2(s)$. The generalized matrix Euclidean algorithm is the same as that of equation (43), except that rank$(A_{i,1}) = n_o$. $D_l(s)$ and $N_l(s)$ can be obtained by the procedure in Section 4.2.

5.2. $T(s) = \bar{B}_1^{-1}(s)\bar{B}_2(s)$, where $\bar{B}_1(s) = B_1(s)Q_1(s)$ and $\bar{B}_2(s) = B_2(s)Q_2(s)$

CASE 1. $n_o \leq n_i$, $T(s) \in R[s]^{n_o \times n_i}$, $\bar{B}_1(s) \in R[s]^{n_o \times n_o}$, $\bar{B}_2(s) \in R[s]^{n_o \times n_i}$,

$T(s) = \bar{B}_1^{-1}(s)\bar{B}_2(s) = [L(s)D_l(s)]^{-1}L(s)N_l(s) = D_l^{-1}(s)N_l(s)$.   (45)

The $L(s)$ is the common left divisor of $\bar{B}_1(s)$ and $\bar{B}_2(s)$. The right polynomial matrix $Q_i(s)$ transforms a non-singular polynomial matrix $\bar{B}_i(s)$ to a co-column-reduced matrix polynomial $B_i(s)$.

The generalized matrix Euclidean algorithm is

$M_i = B_{i+1,1}^{-1}B_{i,1}$,  rank$(B_{i,1}) = n_i$,

$\bar{B}_{i+2}(s) = B_i(s) - B_{i+1}(s)M_i = B_{i+2}(s)Q_{i+2}(s)$,  $i = 1, 2, \ldots, w-1$,   (46)

until $\bar{B}_w$ = a null matrix or a singular one.

The constant matrices $M_i$ with appropriate size are called the matrix quotients. If $n_o < n_i$, $B_{i+1,1}^{-1} := B_{i+1,1}^T[B_{i+1,1}B_{i+1,1}^T]^{-1}$ is the right inverse of $B_{i+1,1}$.

CASE 2. $n_o \geq n_i$, $T(s) \in R[s]^{n_o \times n_i}$, $\bar{B}_1(s) \in R[s]^{n_i \times n_o}$, $\bar{B}_2(s) \in R[s]^{n_i \times n_i}$,

$T(s) = \bar{B}_1^{-1}(s)\bar{B}_2(s) = [L(s)N_r^{-1}(s)]^{-1}L(s)D_r^{-1}(s) = N_r(s)D_r^{-1}(s)$.   (47)

The $L(s)$ is the common left divisor of $\bar{B}_1(s)$ and $\bar{B}_2(s)$. The generalized matrix Euclidean algorithm is the same as that of equation (46), except that rank$(B_{i,1}) = n_o$. $N_r(s)$ and $D_r(s)$ can be determined by the procedures in Section 4.4.

By using Gilbert's theorem, it has been shown [12] that the dynamic state equations, which are constructed by using the matrix quotients $H_i$ or $M_i$ obtained from the matrix Routh algorithms, are minimal realizations of $T(s)$. Concerning the generalized matrix continued-fraction methods, realizations can be found in the work of [21].

Based on the above case studies, one can modify the procedures proposed in Sections 3 and 4 for treating a non-square transfer function matrix. One numerical example of a non-square case is illustrated in Section 7.

6. MULTI-VARIABLE POLE-ASSIGNMENT PROBLEMS

Consider the multi-variable plant

$T(s) = \bar{N}_r(s)\bar{D}_r^{-1}(s)$,   (48)

with a controller

$C(s) = Y_c^{-1}(s)X_c(s)$,   (49)

in the unit feedback system as shown in Figure 1.


Figure 1. Uni t feedback system with the plant in the reducible right MFD form.

Utilizing Section 3.1, we can have the corresponding irreducible right MFD, $N_r(s)D_r^{-1}(s)$; then we can deal with the multi-variable control system in the form of Figure 2.


Figure 2. Unit feedback system with the plant in the irreducible right MFD form.

The return difference equation of the multi-variable control system is

$I + Y_c^{-1}(s)X_c(s)N_r(s)D_r^{-1}(s) = O$,   (50a)

or

$Y_c(s)D_r(s) + X_c(s)N_r(s) = O$,   (50b)

where $I$ and $O$ denote the identity matrix and the null matrix with appropriate dimensions, respectively. Denote $E(s)$ as the desired characteristic matrix polynomial. The multi-variable pole-assignment problem becomes designing the controller $Y_c^{-1}(s)X_c(s)$ to satisfy

$Y_c(s)D_r(s) + X_c(s)N_r(s) = E(s)$.   (51)

For the square case, given Dr(s) and Nr(s), one can find X(s), Y(s), Nl(s) and Dl(s), by the procedures proposed in Sections 3-5, satisfying

$Y(s)D_r(s) + X(s)N_r(s) = I$,   (52a)

$N_l(s)D_r(s) - D_l(s)N_r(s) = O$.   (52b)

Subtracting $E(s)$ times equation (52a) from equation (51), one has

$[Y_c(s) - E(s)Y(s)]D_r(s) + [X_c(s) - E(s)X(s)]N_r(s) = O$.   (53)

From equations (53) and (52b), we can get the desired controller $Y_c^{-1}(s)X_c(s)$, where

$Y_c(s) = N_l(s) + E(s)Y(s)$,   (54a)

$X_c(s) = -D_l(s) + E(s)X(s)$.   (54b)

If we describe the plant via the left MFD,

$T(s) = \bar{D}_l^{-1}(s)\bar{N}_l(s)$,   (55)


then the controller is

$C(s) = X_c(s)Y_c^{-1}(s)$.   (56)

The corresponding multi-variable control system is shown in Figure 3.


Figure 3. Unit feedback system with the plant in the reducible left MFD form.

In a similar manner, we can get an irreducible left MFD via the procedures in Section 3.2. The corresponding multi-variable control system is shown in Figure 4.


Figure 4. Unit feedback system with the plant in the irreducible left MFD form.

From the obtained $D_l(s)$ and $N_l(s)$, one can get $X(s)$, $Y(s)$, $N_r(s)$ and $D_r(s)$ by the procedures proposed in Sections 3-5. Thus, one has

$Y_c(s) = N_r(s) + Y(s)E(s)$,   (57a)

$X_c(s) = -D_r(s) + X(s)E(s)$,   (57b)

where E(s) is the desired characteristic polynomial matrix. Illustrative examples are given in the following section.
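Given the polynomial matrices produced by the procedures of Sections 3-5, the controller factors in (57a)-(57b) are assembled by one polynomial-matrix multiplication and one addition each. The sketch below reuses the hypothetical helpers `poly_mat_mul` and `poly_mat_add` introduced earlier and assumes $E(s)$ is supplied as a polynomial matrix in $s$ in the same coefficient-array form (the paper's examples specify $E(s)$ in powers of $s^{-1}$, which would need the analogous representation).

```python
def pole_assignment_controller(Nr, Dr, X, Y, E):
    """Controller factors for the left-MFD plant case, equations (57a)-(57b):
    Yc(s) = N_r(s) + Y(s)E(s),  Xc(s) = -D_r(s) + X(s)E(s)."""
    Yc = poly_mat_add(Nr, poly_mat_mul(Y, E))
    Xc = poly_mat_add(-np.asarray(Dr), poly_mat_mul(X, E))
    return Yc, Xc
```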

For the non-square cases, we may have the pole-assignment problem in the form of

$Y_c(s)D_r(s) + X_c(s)N_r(s) = Y_c(s)E(s)D_r(s)$.   (58)

From equation (58), one can have

$Y_c(s)D_r(s)[I - E(s)] + X_c(s)N_r(s) = O$.   (59)

Given Dr(s), Nr(s) and specified E(s), one can find Yc(s) and Xe(s) by the procedures proposed in Sections 3-5, satisfying

$N_l(s)\bar{D}_r(s) - D_l(s)\bar{N}_r(s) = O$,   (60)

where $\bar{D}_r(s) = D_r(s)[I - E(s)]$ and $\bar{N}_r(s) = N_r(s)$. The computed $N_l(s)$ and $D_l(s)$ are equal to $Y_c(s)$ and $-X_c(s)$, respectively. The desired controller $Y_c^{-1}(s)X_c(s)$ is thus obtained. For the left MFD, one can approach the non-square cases in a similar way.


7. ILLUSTRATIVE EXAMPLES

EXAMPLE 1. Consider a reducible right MFD as

( s -1 ) ( -~+s,+2s, T ( s ) = ~ ( s ) ( ~ ) - l ( s ) = - 3 + 2 s + s ~ - 4 + s 2 3 - 2 s - s ~

- 9 + 2s + 2s 2 4--8--8 2 )

-1

It is required to find

(i) the greatest common right divisor of $\bar{N}_r(s)$ and $\bar{D}_r(s)$, $R(s)$,

(ii) the solution of the right Diophantine equation

$X(s)N_r(s) + Y(s)D_r(s) = I$,

where $\bar{N}_r(s) = N_r(s)R(s)$, $\bar{D}_r(s) = D_r(s)R(s)$ and $\{N_r(s), D_r(s)\}$ is a right co-prime pair,

(iii) the left co-prime pair $\{N_l(s), D_l(s)\}$ such that $T(s) = D_l^{-1}(s)N_l(s)$,

(iv) the controller $Y_c^{-1}(s)X_c(s)$ such that the multi-variable control system $T(s)$ in the unit feedback system, shown in Figure 1, can have the desired characteristic polynomial matrix $E(s)$, where

$E(s) = \begin{bmatrix}2 + s^{-1} & 0\\ 0 & 2 + 2s^{-1} + s^{-2}\end{bmatrix}$.

SOLUTION.

(i) STEP 1. Initiate with $\bar{A}_1(s) = \bar{D}_r(s)$ and $\bar{A}_2(s) = \bar{N}_r(s)$, then choose

(10) ,n~ ~(s> (1 0) p l ( s ) = o 1 1 1 "

The computed

(10)[(0 °) ( , ~) (~ ~)] A l ( S ) = 0 1 3 4 + - 2 -1 s + -1 -1 s2

= Pl(s) [AI,1 + A1,2s + Ai,3s 2] = PI(s )AI(s ) ,

(10) [ (0_1) (~ 0) (o 0)] "A2(s)= 1 1 - 3 - 4 + 2 0 s + 1 1 s2

= P2(s) [A2,1 + A2,2s + A2,3s 2] = P2(s)A2(s).

STEP 2. The obtained (1 2) HI -- AI,IA~,~ = 0 - 1 '

A3(s) = Al ( s ) - HIA2(s) = 0 - 1 s + 0 0

(, 0)[(~ ,) (0 0)] = 0 s 0 - 1 + 0 0 s = P3(s)[A3,1 +As,2s] = P3(s)As(s) ,

( o l ) H2 = A2,1A3,} = - 1 2 '


(10) (~ 0)8 2 A 4 ( 8 ) = A 2 ( s ) - H2As(8) = 2 0 s + 1

8 1 0 0 0 =(28 802)[(1 1)+(0 0) 8 ] A - i ( 1 H3 -- 3'1A4,1 ~" 1

%

(° ;t5(8) = A3(s) - H3.44(8) = 0

= P~(s)[A4,~ + A¢28] = P4(8)A4(8),

0 o). STEP 3. The greatest common right divisor is

R(8) =A4(s) = 1 1 "

Since $R(s)$ is a constant matrix, we can force $R(s)$ to be an identity matrix, i.e., $R(s) = I_2$.

(ii) From (i), we have the irreducible right MFD,

$D_r(s) = \bar{D}_r(s)R^{-1}(s) = \bar{D}_r(s)$,  $N_r(s) = \bar{N}_r(s)R^{-1}(s) = \bar{N}_r(s)$,

i.e., $\{D_r(s), N_r(s)\}$ is already a right co-prime pair.

STEPS 1 AND 2. Since the constant matrix $R(s)$ is set to be the identity matrix, one can use the same Steps 1 and 2 shown in (i).

STEP 3. For $w = 5$, one has

$\Lambda(s) = \begin{bmatrix}I_2 & O_2\\ O_2 & I_2\end{bmatrix}$,

i.e., $\Lambda_{11} = \Lambda_{22} = I_2$ and $\Lambda_{12} = \Lambda_{21} = O_2$.

STEP 4. The computed

Kl(8)=_(-f i4)_l(s)H2p~l(s)p~_l(s) = ( 0 --8 -2 ) 8--3 8--2 ,

8-- 1 K2(s) -- (;~,)-1(8) [h + ~2P~-~(8)H1] P i % ) ; _8-1 _ 28-2 _ s-~

We can get

( 0 - s - 2 ) Y(s) = Kl(8)All + g2(8)A21 = K1(8) -- s_3 s_2 ,

8-1 X(8) = / f l ( s )A12 + K2(8)'%2 = K2(8) = _8_1 _ 28_2 _ s_3

_8-2 28-2 _ 28-3 ]

f

--8-2 ) 28-2 _ 2s-3 •

(iii) From (ii), we can have

A = AI = f Pit(s)

\ -H2P3-1(s)pi-l(s) i.e.,

/Till =

/~21 =

lo) 0 1 '

0 - 8 -1 ) 8-1 _28-1 ,

-H1P21(s) )

P~l (s ) + e 2 e f t ( s ) S l e ~ l ( s ) '

/ -1 -2 ) 0 1 ) ' 1 - s -1

A 2 2 = - s - 1 1 - 4 8 - 1 "


The c o m p u t e d

K3(s) = P -l(s) = ( s-1 0

K 4 ( s ) - - - H 3 p 4 1 ( s ) = (

Then one can obtain

o) 8--1

__S--I -}_4S--2 __28--2 '~ _ 8 - - 1 _ 28-2 s - 2 ) "

-1 -- 28 -3 8 - 2 / N l ( s ) ~--- K3(8)/~11 -~ K4~k21 = s _ 3 8_ 1 .~_ 8_ 2 ,

s -1 + 2 s . 2 + s -3 _ s - 1 _ 2 s - 2 + 2 s - 3 •

(iv) We get

Yc(s) = N,(s) + E(s)Y(s) = ( 8-1 -- 2S-3 --S-2--S-3 ) 39 - 3 - { - 2 9 - 4 + s - 5 S -1 -4 -39 - 2 + 2 8 - 3 + s - 4 '

Xc(s) = -D,(s) + E(s)X(s) ( 59-2+29-3 _2s-1_3s-2+39-3 )

----- _ 3 S - 1 _ 8 8 - 2 _ 8 8 - 3 _ 4 8 - 4 _ S -5 8 -1 _{_ 69 - 2 _ 28 -3 _ 2 8 - 4 _ 2 8 - 5 ,

and the controller is

1 Y~-l(s)Xc(s) = s 6 + 395 - 2s 3 + s 2 + s + 1

( 295+694-392-39-1 - 2 s 6 - 8 9 5 - 3 9 4 + 5 9 3 - 9 2 - s - 2 ) -396 - 895 - 294 - 393 - s 2 - s s 6 + 695 + 294 - s 3 + s 2 + s + 1

Clearly, the above designed controller is proper. One may use techniques of model reduction [28] to reduce the order of the controller in its realization.

EXAMPLE 2. Consider a reducible left MFD as

i 0) - 2 - s - 2 + s 2 4 - 1 1 "

Note that T(s) is not a square polynomial matrix. It is desired to find

(i) the greatest common left divisor of $\bar{N}_l(s)$ and $\bar{D}_l(s)$, $L(s)$,

(ii) the solution of the left Diophantine equation

$N_l(s)X(s) + D_l(s)Y(s) = I$,

where $\bar{N}_l(s) = L(s)N_l(s)$, $\bar{D}_l(s) = L(s)D_l(s)$ and $\{N_l(s), D_l(s)\}$ is a left co-prime pair,

(iii) the right co-prime pair $\{N_r(s), D_r(s)\}$ such that $T(s) = N_r(s)D_r^{-1}(s)$.

SOLUTION.

(i) STEP 1. The computed

[(o_1) (i i) (i 0) B I ( S ) = D I ( s ) = - 2 - 2 + - 1 0 s + 0 1 92 0 1

= [91,1 "4- 81,28 + 91,3(8)82] {~1(8) = 91(S)Q1(S), ZS-IZ-F


B2(s)=g ' (s)= I / -34 -10 0/1 + / 0 1 0 ) 0 0 0 s] ( 1 0 0 ) 0 0 01 01

= [B2,1 + B2,28] Q2(s) -- B2(s)Q2(8).

STEP 2. The obtained

B T T - 1 ( 0 0.33 ) M1 = 2,1(B2,1B2,1) B1,1 = I 1.67 ,

- I -1.67

(ii)

[( o o) BS(s) = BI(S) - B2(s)M1 = -1 0 + 0 1 s 0 s

----- [B3,1 + B3,2s]Q3(8) = B3(s)Q3(8),

M2=B~,,~B2,1= ( -4 1 -1 ) 1.12 0 0 '

I( ) ( 4 0 1 0 0 0 0 s O - B 4 ( s ) = B 2 ( s ) - B s ( s ) M 2 = -1.12 0 0 + 0 0 0 0 0 s

= [B4,1 + B4,2s]Q4(s) = B4(s)Q4(s),

0.89 0 ) Ms = B~I(B,,IBT1)-IB3,1 = 0 0 ,

-3.56 -2.67

o) Bs(s) = Bs(s)- B4(s)Ma = 0 1 0 s

= Ss, lQs(s) = Bs(s)Qs(s),

4 0 1 ) M4=B~,~S,,I= -1.12 o o '

-~6(s) = B4(s)_ Ba(s)M4= ( O O ) 0 0 "

Since $B_5(s)$ is a constant matrix, the greatest common left divisor of $\bar{N}_l(s)$ and $\bar{D}_l(s)$ can be set to an identity matrix, i.e.,

$L(s) = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} = I_2$,

and $\{N_l(s), D_l(s)\}$ is already a left co-prime pair.

Since $N_l(s) = \bar{N}_l(s)$ and $D_l(s) = \bar{D}_l(s)$, the required Step 1 and Step 2 can be obtained from (i), with $w = 6$.

STEP 3. The computed

A = A I = [ Q~1(8) [ -Q~I M1

] [ A n A12] - Q l t (S)Q31(s)M2 _ ,

Q~l(s) + Q~I(s)MIQ~I(s)M2 A21 A22


where

(10) A l l = 0 1 '

A21 = - 1 - 1 . 6 7 , 1 1.67

A12 =

A22 =

4s_1 _8-1 8-1 ) --1.12s -1 0 0 '

- 2 . 1 3 s -1 1 + s -1 - s -1 •

2.13s -1 - s -1 l + s -1

STEP 4. Since $w = 6$ is even, one has

J l ( s ) = Q31(s)(-Bs)-l(s) = 0 s -2 '

( -0.89s -~ J2(s) = -Q41(s)Ma(Qs)-I(s) = 0

3.56S - 2

We o b t a i n

Y(s) = AllJl(S) -{- A12J2(s) =

X(s) = h21gl(s) + A22J2(s) =

s - 2 2"67s-3 / S-3 S--2

--0.89s 2 -- 0.33s - 3 _ s - 2 _ 1 .67s-3

4.56s -2 + 1.67s - 3

0) 0

2.67s - 2

- 0 " 3 3 s - 2 ) _ 1 . 6 7 s - 2 - 2 .67s -3 . 4.34s - 2 + 2.67s - 3

(iii) For w = 6,

/~(8) =/~1__ ( /~11 /~12 ) h21 A22 '

is equal to $\Lambda(s)$, where $\hat{\Lambda}_{ij} = \Lambda_{ij}$ for $i = 1, 2$ and $j = 1, 2$. Also, the computed

--4s -2 0 --S -2 ) Js(S)=-Q31(s)Q51(s)M4 = 1.12s - 2 0 0 '

J a ( s ) = Q 4 1 ( s ) + Q41(s)M3Q51(s)M4 = (

Then, obtain

3s_3 Nr(s) = A l l J 3 ( 8 ) -}- A12J4(8) = __48-3

Dr(s) = - /~21J3(s ) - A22J4(s)

_ s - 1 _ 3.56s - 2 _ 1.33s - 3 = --3.67s - 3

11.24s -2 + 3.67s -3

s -1 + 3.56s - 2 0 0.89s - 2 0 8 -1 0 fl .

--11.25S - 2 0 S -1 -- 3.56S -1

-s -2 0 ) 0 _S-3 ,

0 _8-1 _ 8-2

8-2

_0 .89s - 2 _ 0.338 - 3 - - 1 " 6 7 s - 3 )

- - s -1 + 3.56s - 2 + 1.67s - a

8. CONCLUSIONS

Without utilizing the minimal state-space realization of a transfer function matrix, efficient algorithms are developed for solving the Diophantine equations, finding the greatest common divisors, and converting the right(left) MFDs to the left(right) MFDs via the generalized matrix Euclidean algorithms. Multi-variable pole-assignment techniques are also discussed for engineering applications. The generalized matrix Euclidean algorithms developed in this paper provide an effective computer-aided method for the analysis and synthesis of multi-variable control systems described by MFDs.


REFERENCES

1. H.H. Rosenbrock, State-Space and Multivariable Theory, Nelson, London, (1970).
2. W.A. Wolovich, Linear Multivariable Systems, Springer-Verlag, N.Y., (1974).
3. V. Kučera, Discrete Linear Control, Wiley, N.Y., (1979).
4. T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, N.J., (1980).
5. F.M. Callier and C.A. Desoer, Multivariable Feedback Systems, Springer-Verlag, N.Y., (1982).
6. C.T. Chen, Linear System Theory and Design, Holt, Rinehart and Winston, N.Y., (1984).
7. S. Barnett, Polynomials and Linear Control Systems, Marcel Dekker, N.Y., (1983).
8. S. Barnett, Matrices in Control Theory, Robert E. Krieger Publishing Co., Malabar, Florida, (1984).
9. W.A. Wolovich and P.J. Antsaklis, The canonical Diophantine equations with applications, SIAM J. Control Optim. 22 (5), 777-787 (1984).
10. W.A. Wolovich, A division algorithm for polynomial matrices, IEEE Trans. Autom. Control AC-29, 656-658 (1984).
11. I. Gohberg, P. Lancaster and L. Rodman, Matrix Polynomials, Academic Press, N.Y., (1982).
12. L.S. Shieh and F.F. Gaudiano, Some properties and applications of matrix continued fractions, IEEE Trans. Circuits Syst. 22, 721-728 (1975).
13. L.S. Shieh, F.R. Chang and R.E. Yates, The generalized matrix continued-fraction descriptions in the second Cauer form, IEEE Trans. Autom. Control AC-30 (8), 813-816 (1985).
14. D.H. Owens and A.D. Field, Nested-feedback-loop decompositions for reduction of linear multivariable systems, Proc. Inst. Elec. Eng. 127 (Part D) (3), 103-114 (1980).
15. L.S. Shieh and A. Tajvari, Analysis and synthesis of matrix transfer functions using the new block-state equations in block-tridiagonal forms, Proc. Inst. Elec. Eng. 127 (Part D), 19-31 (1980).
16. L.S. Shieh, C.H. Chen and N.P. Coleman, Stability of a class of multivariable control systems described by the nested matrix continued-fraction description, Int. J. Systems Sci. 15, 439-457 (1984).
17. L.S. Shieh, M.M. Mehio and H.M. Dib, Stability of the second-order matrix polynomial, IEEE Trans. Automatic Control AC-32 (3), 231-233 (1987).
18. C.F. Chen, Model reduction of multivariable control systems by means of continued fractions, Int. J. Contr. 20, 225-238 (1974).
19. L.S. Shieh, J.M. Navarro and R.E. Yates, Multivariable systems identification via frequency response, Proc. IEEE 62 (8), 1169-1171 (1974).
20. L.S. Shieh, C.D. Shih and R.E. Yates, Some sufficient and some necessary conditions for stability of multivariable systems, ASME J. Dynam. Syst. Meas. Contr. 100, 214-218 (1978).
21. L.S. Shieh, S. Yeh and H.Y. Zhang, Realizations of matrix transfer functions using RC ladders and summers, Proc. Inst. Elec. Eng. 128 (Part G) (3), 101-110 (1981).
22. L.S. Shieh, Y.J. Wang and Y.L. Bao, Canonical model transformations via the long division and continued fraction methods, Int. J. Systems Sci. 20 (6), 1019-1033 (1989).
23. H.H. Rosenbrock, Structural properties of linear dynamical systems, Int. J. Control 20 (2), 191-202 (1974).
24. L.S. Shieh, Y.J. Wei and J.M. Navarro, An algebraic method to determine the common divisor, poles and transmission zeros of matrix transfer functions, Int. J. Systems Sci. 9 (9), 949-964 (1978).
25. L.S. Shieh, C.G. Patel and H.Z. Chow, Additional properties and applications of matrix continued fraction, Int. J. Systems Sci. 8, 97-107 (1977).
26. I.S. Krishnarao and C.T. Chen, Two polynomial matrix operations, IEEE Trans. Automat. Contr. AC-29 (4), 346-348 (1984).
27. K.J. Åström and B. Wittenmark, Computer Controlled Systems: Theory and Design, Prentice-Hall, Englewood Cliffs, N.J., (1984).
28. L.S. Shieh and Y.J. Wei, A mixed method for multivariable system reduction, IEEE Trans. Autom. Control AC-20, 429-432 (1975).