Controllability, Observability, Stability and Stabilizability of

Linear Systems

Raju K George, IIST, Thiruvananthapuram

email: [email protected], [email protected]

Consider the n-dimensional control system described by the vector differential equation :

ẋ(t) = A(t)x(t) + B(t)u(t),  t ∈ (t0, ∞) (1)

x(t0) = x0

where A(t) = (aij(t))n×n is an n × n matrix whose entries are continuous functions of t defined on I = [t0, t1], and B(t) = (bij(t))n×m is an n × m matrix whose entries are continuous functions of t on I. The state x(t) is an n-vector and the control u(t) is an m-vector. We first deal with controllability of a one-dimensional system described by a scalar differential equation.

What is a control system ?:

Consider the 1-dimensional system

dx/dt = −2x,  x(0) = 3

The solution of the system is x(t) = 3e^{−2t}, and its graph is shown in the following figure.

[Figure: graph of x(t) = 3e^{−2t}, decaying from the initial value 3]

If we add a nonhomogeneous term sin(t), called the forcing term or control term, the system becomes

dx/dt = −2x + sin(t),  x(0) = 3


Observe that the solution, or trajectory, of the system is changed. That is, the evolution of the system is changed by adding the new forcing term. A system with a forcing term is called a control system.

Controllability Problem: The controllability problem is to check the existence of a forcing term or control function u(t) such that the corresponding solution of the system passes through a desired point x(t1) = x1.

We now show that the scalar control system

ẋ = ax + bu

x(t0) = x0

is controllable. We produce a control function u(t) such that the corresponding solution starting with x(t0) = x0 also satisfies x(t1) = x1. Choose a differentiable function z(t) satisfying z(t0) = x0 and z(t1) = x1; for example, by linear interpolation,

z(t) = x0 + ((x1 − x0)/(t1 − t0)) (t − t0)

which satisfies

z(t0) = x0,  z(t1) = x1

A Steering Control using z(t): The form of the control system

ẋ = ax + bu

motivates a control of the form

u = (1/b)[ẋ − ax]

Thus we define a control using the function z by

u = (1/b)[ż − az]

Substituting this control into the system gives

ẋ = ax + b · (1/b)[ż − az]
ẋ − ż = a(x − z)
d/dt (x − z) = a(x − z)
x(t0) − z(t0) = 0

Let y = x − z. Then

dy/dt = ay


y(t0) = 0.

The unique solution of this system is y(t) = x(t) − z(t) = 0. That is, x(t) = z(t) is the solution of the controlled system satisfying the required end conditions x(t0) = x0 and x(t1) = x1. Thus the control function

u(t) = (1/b)[ż(t) − az(t)]

is a steering control.

Remark: Here we have not only controllability; the control steers the system along the given trajectory z. This stronger notion of controllability is known as trajectory controllability. Trajectory controllability is possible for a time-dependent scalar system ẋ = a(t)x + b(t)u with b(t) ≠ 0 for all t ∈ [t0, t1]. In this case the steering control is

u = (1/b(t))[ż − a(t)z]
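The scalar steering construction above can be checked numerically. The sketch below (not part of the original notes; the values of a, b, x0, x1, t0, t1 are arbitrary choices with b ≠ 0) integrates the controlled system and confirms that the trajectory follows z:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar system x' = a x + b u, steered along the linear interpolant z(t)
a, b = 1.5, 2.0          # arbitrary system parameters, b != 0
t0, t1 = 0.0, 1.0
x0, x1 = 3.0, -1.0       # initial and desired final states

def z(t):                # z(t0) = x0, z(t1) = x1
    return x0 + (x1 - x0) * (t - t0) / (t1 - t0)

zdot = (x1 - x0) / (t1 - t0)

def u(t):                # u = (1/b)[z' - a z]
    return (zdot - a * z(t)) / b

sol = solve_ivp(lambda t, x: a * x[0] + b * u(t), (t0, t1), [x0],
                rtol=1e-10, atol=1e-12)
print(abs(sol.y[0, -1] - x1) < 1e-6)  # True: the control steers x0 to x1
```

Since x(t) ≡ z(t) along the controlled motion, the terminal state matches x1 to integration accuracy.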

n-dimensional system with m = n

Consider an n-dimensional system ẋ = Ax + Bu, where A and B are n × n matrices and B is invertible. As in the scalar case, consider a control of the form

u = B⁻¹[ż − Az]

where z(t) is an n-vector-valued differentiable function satisfying z(t0) = x0 and z(t1) = x1. Using this control we have

ẋ = Ax + BB⁻¹[ż − Az]
ẋ − ż = A(x − z)
x(t0) − z(t0) = 0
⟹ x(t) = z(t)

Remark: More generally, if B (now n × m) has a right inverse B⁺ with BB⁺ = I, then the same construction with u = B⁺[ż − Az] works and the system is again trajectory controllable.

When m < n: When m < n we consider the system

ẋ = A(t)x + B(t)u (2)

x(t0) = x0

A(t) = (aij(t))n×n,

B(t) = (bij(t))n×m

Definition (Controllability): The system (2) is controllable on an interval [t0, t1] if for all x0, x1 ∈ IRn there exists a control function u ∈ L2([t0, t1]; IRm) such that the corresponding solution of (2) satisfying x(t0) = x0 also satisfies x(t1) = x1. Since x0 and x1 are arbitrary, this notion is also known as exact controllability or complete controllability.

Subspace Controllability: Let D ⊂ IRn be a subspace. If the system is controllable for all x0, x1 ∈ D then we say that the system is controllable to the subspace D.


Approximate Controllability: If D is dense in the state space then the system is approximately controllable. But the only dense subspace of IRn is IRn itself, so approximate controllability is equivalent to complete controllability in IRn. For a subspace D we have

D ⊆ IRn and D̄ = IRn  ⟹  D = IRn

Null Controllability: If every non-zero state x0 ∈ IRn can be steered to the null state 0 ∈ IRn by a steering control, then the system is said to be null controllable.

We now see examples of controllable and uncontrollable systems.

Example: Tank Problem :

[Figure: two tanks in cascade; supply u enters Tank 1 (level x1(t)), which drains at rate α into Tank 2 (level x2(t)), which drains at rate β]

Let x1(t) be the water level in Tank 1 and x2(t) the water level in Tank 2. Let α be the rate of outflow from Tank 1 and β the rate of outflow from Tank 2. Let u be the supply of water to the system. The system can be modelled by the following differential equations:

dx1/dt = −αx1 + u
dx2/dt = αx1 − βx2

Model 1:

d/dt (x1)   (−α  0)(x1)   (1)
     (x2) = ( α −β)(x2) + (0) u

Model - 2 :


[Figure: Model 2 — the supply u enters Tank 2 instead of Tank 1]

dx1/dt = −αx1
dx2/dt = αx1 − βx2 + u

d/dt (x1)   (−α  0)(x1)   (0)
     (x2) = ( α −β)(x2) + (1) u

Obviously the second tank model is not controllable, because the supply cannot change the water level in Tank 1. We will see later that Model 1 is controllable whereas Model 2 is not. Controllability analysis arises in many real-life problems, such as:

(i) Rocket launching Problem, Satellite control and control of aircraft

(ii) Biological System : Sugar Level in blood

(iii) Defence: Missiles & Anti-missiles problems.

(iv) Economy: regulating the inflation rate

(v) Ecology: predator-prey systems

Solution of the Controlled System using the Transition Matrix: Consider the n-dimensional linear control system

ẋ = A(t)x + B(t)u,  x(t0) = x0

Let Φ(t, t0) be the transition matrix of the homogeneous system ẋ = A(t)x. The solution of the control system is given by (using the variation of parameters method)

x(t) = Φ(t, t0)x0 + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ

The system is controllable iff for arbitrary initial and final states x0, x1 there exists a control function u such that

x1 = Φ(t1, t0)x0 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ


We first show that for linear systems complete controllability and null-controllability are equivalent.

Theorem: The linear system (1) is completely controllable iff it is null-controllable.

Proof: It is obvious that complete controllability implies null-controllability. We now show that null-controllability implies complete controllability. Suppose that x0 is to be steered to x1. Suppose that the system is null-controllable and let w0 = x0 − Φ(t0, t1)x1. Then there exists a control u such that

0 = Φ(t1, t0)w0 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ
  = Φ(t1, t0)[x0 − Φ(t0, t1)x1] + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ
  = Φ(t1, t0)x0 − x1 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ

Hence

x1 = Φ(t1, t0)x0 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ = x(t1)

⟹ u steers x0 to x1 during [t0, t1].

Conditions for Controllability :

The system (1) is controllable iff there exists u ∈ L2(I, IRm) such that

x1 = Φ(t1, t0)x0 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ

i.e.,

x1 − Φ(t1, t0)x0 = ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ

Define an operator C : L2(I, IRm) → IRn by

Cu = ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ

Obviously, C is a bounded linear operator and the range of C is a subspace of IRn. Since x0 and x1 are arbitrary, the system is controllable iff C is onto. Range(C) is called the reachable set of the system.

Theorem : The following statements are equivalent:

1. The linear system (1) is completely controllable.

2. C is onto

3. C∗ is 1-1

4. CC∗ is 1-1


In the above result, the operator C∗ is the adjoint of the operator C. We now obtain the explicit form of C∗.

Adjoint Operator: The operator C : L2(I, IRm) → IRn defines its adjoint C∗ : IRn → L2(I, IRm) in the following way:

⟨Cu, v⟩_IRn = ⟨∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ, v⟩_IRn
           = ∫_{t0}^{t1} ⟨Φ(t1, τ)B(τ)u(τ), v⟩_IRn dτ
           = ∫_{t0}^{t1} ⟨u(τ), B∗(τ)Φ∗(t1, τ)v⟩_IRm dτ
           = ⟨u, B∗(·)Φ∗(t1, ·)v⟩_L2(I,IRm)
           = ⟨u, C∗v⟩_L2(I,IRm)

Thus

(C∗v)(t) = B∗(t)Φ∗(t1, t)v

Using C∗ we get CC∗ in the form

CC∗ = ∫_{t0}^{t1} Φ(t1, τ)B(τ)B∗(τ)Φ∗(t1, τ) dτ

Observe that CC∗ : IRn → IRn is a bounded linear operator; thus CC∗ is an n × n matrix. From the previous theorem, the system (1) is controllable ⟺ C is onto ⟺ CC∗ is 1-1 ⟺ CC∗ is an invertible matrix. The matrix CC∗ is known as the Controllability Grammian of the linear system:

W(t0, t1) = ∫_{t0}^{t1} Φ(t1, τ)B(τ)B∗(τ)Φ∗(t1, τ) dτ

Using the inverse of the controllability Grammian we now define a steering control, as given in the following theorem.

Theorem: The linear control system is controllable iff W(t0, t1) is invertible, and a steering control that moves x0 to x1 is given by

u(t) = B∗(t)Φ∗(t1, t)W⁻¹(t0, t1)[x1 − Φ(t1, t0)x0]

Proof: The controllability part was already proved earlier. We now show that the steering control defined above actually performs the transfer of states. The controlled state is given by

x(t) = Φ(t, t0)x0 + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ
     = Φ(t, t0)x0 + ∫_{t0}^{t} Φ(t, τ)B(τ)B∗(τ)Φ∗(t1, τ)W⁻¹(t0, t1)[x1 − Φ(t1, t0)x0] dτ

At t = t1,

x(t1) = Φ(t1, t0)x0 + W(t0, t1)W⁻¹(t0, t1)[x1 − Φ(t1, t0)x0]

x(t1) = x1
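For a time-invariant system the Grammian formula can be evaluated numerically. The sketch below (an illustration, not from the notes; the matrices and target states are arbitrary sample choices) builds W(t0, t1), forms the steering control of the theorem, and verifies x(t1) = x1 by integration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0., 1.], [-1., 0.]])      # sample controllable pair (A, B)
B = np.array([[0.], [1.]])
t0, t1 = 0.0, np.pi
x0 = np.array([0., 0.])
x1 = np.array([0.5, 0.5])

def Phi(t, s):                           # transition matrix for constant A
    return expm(A * (t - s))

# Controllability Grammian W(t0, t1)
W, _ = quad_vec(lambda s: Phi(t1, s) @ B @ B.T @ Phi(t1, s).T, t0, t1)

def u(t):                                # u(t) = B* Phi*(t1,t) W^{-1} [x1 - Phi(t1,t0) x0]
    return B.T @ Phi(t1, t).T @ np.linalg.solve(W, x1 - Phi(t1, t0) @ x0)

sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, t1), x0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], x1, atol=1e-6))  # True
```

The same recipe works for any controllable time-invariant pair (A, B); only the computation of Φ changes in the time-varying case.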

Remark: Among all controls steering x0 to x1, the control defined above has minimum L2-norm (energy). We will prove this fact later.

For time-invariant A and B, define a matrix Q by

Q = [B | AB | … | A^{n−1}B]


It can be shown that Range of W (t0, t1) = Range of Q

Controllability of the linear system and the rank of Q are related by the following Kalman rank test.

Theorem (Kalman's Rank Condition): If the matrices A and B are time-independent, then the linear system (1) is controllable iff

Rank[B | AB | … | A^{n−1}B] = n

Proof: Suppose that the system is controllable. Then the operator C : L2(I, IRm) → IRn defined by

Cu = ∫_{t0}^{t1} Φ(t1, τ)Bu(τ) dτ

is onto. We now prove that

IRn = Range(C) ⊆ Range(Q).

Let x ∈ IRn. Then there exists u ∈ L2(I, IRm) such that

∫_{t0}^{t1} e^{A(t1−τ)}Bu(τ) dτ = x

Expand e^{A(t1−τ)} by the Cayley-Hamilton theorem as p0(τ)I + p1(τ)A + ⋯ + p_{n−1}(τ)A^{n−1} for scalar functions pk. Then

∫_{t0}^{t1} [p0(τ)I + p1(τ)A + p2(τ)A² + ⋯ + p_{n−1}(τ)A^{n−1}]Bu(τ) dτ = x

⟹ x ∈ Range[B | AB | A²B | … | A^{n−1}B]

Conversely, suppose that the rank condition holds but the system is not controllable, i.e., W(t0, t1) is singular.

⟹ there exists v ≠ 0 ∈ IRn such that W(t0, t1)v = 0
⟹ vᵀW(t0, t1)v = 0

∫_{t0}^{t1} vᵀΦ(t1, τ)BB∗Φ∗(t1, τ)v dτ = 0

⟹ ∫_{t0}^{t1} ||B∗Φ∗(t1, τ)v||² dτ = 0

⟹ B∗Φ∗(t1, t)v = 0,  t ∈ [t0, t1]
⟹ vᵀΦ(t1, t)B = 0,  t ∈ [t0, t1]

vᵀe^{A(t1−t)}B = 0,  t ∈ [t0, t1]

Putting t = t1 gives vᵀB = 0. Differentiating vᵀe^{A(t1−t)}B = 0 repeatedly with respect to t and putting t = t1,

−vᵀAB = 0, …, ±vᵀA^{n−1}B = 0

⟹ v ⊥ Range[B | AB | … | A^{n−1}B]


Hence Rank(Q) ≠ n. The rank condition is violated, which is a contradiction; thus the system is controllable.

Examples: Tank Problem, Model 1:

d/dt (x1)   (−α  0)(x1)   (1)
     (x2) = ( α −β)(x2) + (0) u

Q = [B | AB] = (1 −α)
               (0  α)

Rank Q = 2 ⟹ the system is controllable.

Model 2:

d/dt (x1)   (−α  0)(x1)   (0)
     (x2) = ( α −β)(x2) + (1) u

Q = [B | AB] = (0  0)
               (1 −β)

rank(Q) = 1 < 2 ⟹ the system is not controllable.
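The two rank computations can be reproduced with a few lines of numpy (a sketch assuming the sample values α = 1, β = 2; any positive rates give the same ranks):

```python
import numpy as np

def controllability_matrix(A, B):
    """Q = [B | AB | ... | A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

alpha, beta = 1.0, 2.0
A = np.array([[-alpha, 0.], [alpha, -beta]])
B1 = np.array([[1.], [0.]])   # Model 1: supply into Tank 1
B2 = np.array([[0.], [1.]])   # Model 2: supply into Tank 2

print(np.linalg.matrix_rank(controllability_matrix(A, B1)))  # 2 -> controllable
print(np.linalg.matrix_rank(controllability_matrix(A, B2)))  # 1 -> not controllable
```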

Computation of Steering Control:

Cu = w
CC∗v = w

where u = C∗v. The system is controllable iff

C is onto ⟺ C∗ is 1-1 ⟺ CC∗ is 1-1 ⟺ CC∗ is invertible.

If CC∗ is invertible then

v = (CC∗)⁻¹w
u = C∗(CC∗)⁻¹w

is the steering control.

Controllability Example:

Spring-Mass System: Consider a spring-mass system with unit mass and spring constant 1. By Newton's law of motion we have the following differential equation.

[Figure: unit mass (m = 1) attached to a spring of stiffness k = 1, with displacement y(t)]

y″ + y = 0

Let x1 = y and x2 = y′. Then

ẋ1 = x2
ẋ2 = y″ = −y = −x1

d/dt (x1)   ( 0 1)(x1)
     (x2) = (−1 0)(x2)

A = ( 0 1)
    (−1 0)

Transition Matrix by the Laplace Transform Method:

We know that e^{At} = L⁻¹{(sI − A)⁻¹}.

sI − A = (s −1)
         (1  s)

(sI − A)⁻¹ = 1/(s² + 1) ( s  1)  =  ( s/(s²+1)   1/(s²+1))
                        (−1  s)     (−1/(s²+1)   s/(s²+1))

L⁻¹{(sI − A)⁻¹} = ( cos t  sin t)
                  (−sin t  cos t)

Another Way - Matrix Expansion:

e^{At} = I + At + (A²/2!)t² + (A³/3!)t³ + ⋯

A² = ( 0 1)( 0 1) = −I,  A³ = −A,  A⁴ = I,  A⁵ = A, …
     (−1 0)(−1 0)

e^{At} = (1 0) + ( 0 t) + (−t²/2!      0 ) + (0      −t³/3!) + (t⁴/4!     0 ) + ⋯
         (0 1)   (−t 0)   ( 0     −t²/2!)   (t³/3!       0)   (0     t⁴/4!)

      = (1 − t²/2! + t⁴/4! − ⋯      t − t³/3! + t⁵/5! − ⋯)
        (−t + t³/3! − t⁵/5! + ⋯     1 − t²/2! + t⁴/4! − ⋯)

      = ( cos t  sin t)
        (−sin t  cos t)
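The closed form e^{At} = (cos t, sin t; −sin t, cos t) can be cross-checked against scipy.linalg.expm (a quick sanity check, not part of the original notes):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.], [-1., 0.]])
t = 0.7  # arbitrary time instant
analytic = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
print(np.allclose(expm(A * t), analytic))  # True
```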

Let the initial state and the desired final state be

(x1(0))   (0)        (x1(T))   (1/2)
(x2(0)) = (0),       (x2(T)) = (1/2)

The transition matrix is given by

Φ(T, t) = ( cos(T − t)  sin(T − t))
          (−sin(T − t)  cos(T − t))

B = (0)
    (1)

The controllability Grammian is given by

W(0, T) = ∫_{0}^{T} (sin(T − t)) (sin(T − t)  cos(T − t)) dt
                    (cos(T − t))

        = (½(T − ½ sin 2T)   ¼(1 − cos 2T) )
          (¼(1 − cos 2T)     ½(T + ½ sin 2T))

W⁻¹(0, T) = 4/(T² − ½(1 − cos 2T)) (½(T + ½ sin 2T)   ¼(cos 2T − 1) )
                                    (¼(cos 2T − 1)     ½(T − ½ sin 2T))

The steering control is

u(t) = B∗Φ∗(T, t)W⁻¹(0, T) (1/2)
                           (1/2)

     = (sin(T − t)  cos(T − t)) W⁻¹(0, T) (1/2)
                                          (1/2)

     = 1/(T² − ½(1 − cos 2T)) { sin(T − t)[(T + ½ sin 2T) + ½(cos 2T − 1)]
                              + cos(T − t)[(T − ½ sin 2T) + ½(cos 2T − 1)] }

Minimum Norm Control: We now prove that the steering control defined in the above discussion is actually an optimal control.

Theorem: The control function u0(t) = B∗(t)Φ∗(t1, t)W⁻¹(t0, t1)x1 is a minimum-norm control among all controls steering the system from state x0 to state x1. That is, ||u0|| ≤ ||u|| for every other steering control u in L2(I, IRm).

Proof: Let u = u0 + (u − u0). Then

||u||² = ||u0 + (u − u0)||²
      = ⟨u0 + (u − u0), u0 + (u − u0)⟩
      = ⟨u0, u0⟩ + ⟨u0, u − u0⟩ + ⟨u − u0, u0⟩ + ⟨u − u0, u − u0⟩
      = ||u0||² + ||u − u0||² + 2 Re⟨u0, u − u0⟩_L2

Now,

⟨u0, u − u0⟩_L2 = ∫_{t0}^{t1} ⟨u0(t), u(t) − u0(t)⟩_IRm dt
              = ∫_{t0}^{t1} ⟨B∗(t)Φ∗(t1, t)W⁻¹(t0, t1)x1, u(t) − u0(t)⟩ dt
              = ⟨W⁻¹(t0, t1)x1, ∫_{t0}^{t1} Φ(t1, t)B(t)[u(t) − u0(t)] dt⟩
              = ⟨W⁻¹(t0, t1)x1, x1 − x1⟩ = 0

since both u and u0 are steering controls.


Thus

||u||² = ||u0||² + ||u − u0||²

so that

||u||² − ||u0||² = ||u − u0||² ≥ 0,  i.e., ||u||² ≥ ||u0||²

for all steering controls u.

Adjoint Equation: An equation having solution x in some inner product space is said to be the adjoint of an equation with solution p in the same inner product space if ⟨x(t), p(t)⟩ = constant.

Theorem: The adjoint equation associated with ẋ = A(t)x is ṗ(t) = −A∗(t)p.

Proof:

d/dt ⟨x(t), p(t)⟩ = ⟨ẋ(t), p(t)⟩ + ⟨x(t), ṗ(t)⟩
 = ⟨A(t)x, p(t)⟩ + ⟨x(t), −A∗(t)p(t)⟩
 = ⟨x(t), A∗(t)p(t)⟩ + ⟨x(t), −A∗(t)p(t)⟩
 = ⟨x(t), 0⟩ = 0

∴ ⟨x(t), p(t)⟩ = constant.

Theorem: If Φ(t, t0) is the transition matrix of ẋ(t) = A(t)x, then Φ∗(t0, t) is the transition matrix of its adjoint system ṗ = −A∗(t)p.

Proof:

I = Φ⁻¹(t, t0)Φ(t, t0)

0 = d/dt I = d/dt [Φ⁻¹(t, t0)Φ(t, t0)]
  = d/dt [Φ⁻¹(t, t0)] Φ(t, t0) + Φ⁻¹(t, t0) Φ̇(t, t0)
  = d/dt [Φ(t0, t)] Φ(t, t0) + Φ(t0, t)A(t)Φ(t, t0)

0 = [Φ̇(t0, t) + Φ(t0, t)A(t)]Φ(t, t0)
⟹ Φ̇(t0, t) = −Φ(t0, t)A(t)

Taking adjoints,

d/dt Φ∗(t0, t) = −A∗(t)Φ∗(t0, t)

⟹ Φ∗(t0, t) is the transition matrix of the adjoint system.

Remark: The system is self-adjoint if A(t) = −A∗(t), and in this case

Φ(t, t0) = Φ∗(t0, t) = [Φ∗(t, t0)]⁻¹

so that Φ(t, t0)Φ∗(t, t0) = I.

Observability

The observability problem is that of finding the state vector knowing only the output y over some interval of time [t0, t1]. Consider the input-free system

ẋ(t) = A(t)x(t) (3)

with the observation equation

y(t) = C(t)x(t),


where C(t) = (cij(t))m×n is a matrix whose entries are continuous functions of t. Let Φ(t, t0) be the transition matrix, so that the solution is x(t) = Φ(t, t0)x0. Thus

y(t) = C(t)Φ(t, t0)x0,  t0 ≤ t ≤ t1

Definition: System (3) is said to be observable over a time period [t0, t1] if it is possible to determine uniquely the initial state x(t0) = x0 from the knowledge of the output y(t) over [t0, t1]. The complete state of the system is known once the initial state x0 is known.

Define a linear operator L : IRn → L2([t0, t1]; IRm) by

(Lx0)(t) = C(t)Φ(t, t0)x0

Thus

y(t) = (Lx0)(t),  t ∈ [t0, t1]

The system is observable iff x0 is uniquely determined by y, i.e., iff L is 1-1.

Theorem :The following statements are equivalent.

1. The linear system x(t) = A(t)x(t), y(t) = C(t)x(t) is observable.

2. The operator L is 1-1.

3. The adjoint operator L∗ is onto.

4. The operator L∗L is onto.

Remark : L∗L : IRn → IRn is an n× n matrix called Observability Grammian.

Finding L∗ : L2([t0, t1]; IRm) → IRn:

⟨(Lx0)(·), w(·)⟩_L2(I,IRm) = ∫_{t0}^{t1} ⟨C(t)Φ(t, t0)x0, w(t)⟩_IRm dt
 = ∫_{t0}^{t1} ⟨x0, Φ∗(t, t0)C∗(t)w(t)⟩_IRn dt
 = ⟨x0, ∫_{t0}^{t1} Φ∗(t, t0)C∗(t)w(t) dt⟩_IRn
 = ⟨x0, L∗w⟩_IRn

Thus,

L∗w = ∫_{t0}^{t1} Φ∗(t, t0)C∗(t)w(t) dt

Observability Grammian: The observability Grammian is given by

M(t0, t1) = L∗L = ∫_{t0}^{t1} Φ∗(t, t0)C∗(t)C(t)Φ(t, t0) dt


The linear system is observable if and only if the observability Grammian is invertible.

Kalman’s Rank Condition for Time Invariant System

If A and C are time-independent matrices, then we have the following rank condition for observability.

Theorem: The linear system ẋ(t) = Ax(t), y(t) = Cx(t) is observable iff the rank of the following observability matrix O is n:

O = ( C        )
    ( CA       )
    ( CA²      )
    ( ⋮        )
    ( CA^{n−1} )

Proof: The observation y(t) and its time derivatives are given by

y(t) = Ce^{At}x(0)
y′(t) = CAe^{At}x(0)
y″(t) = CA²e^{At}x(0)
⋮
y^{(n−1)}(t) = CA^{n−1}e^{At}x(0)

At t = 0 we have the following relations:

y(0) = Cx(0)
y′(0) = CAx(0)
y″(0) = CA²x(0)
⋮
y^{(n−1)}(0) = CA^{n−1}x(0)

The initial condition x(0) can be obtained from the equation

( C        )          ( y(0)        )
( CA       )          ( y′(0)       )
( CA²      ) x(0)  =  ( y″(0)       )
( ⋮        )          ( ⋮           )
( CA^{n−1} )          ( y^{(n−1)}(0))

The initial state x(0) can be determined if the observability matrix on the left-hand side has full rank n. Hence the system is observable if Kalman's rank condition holds. The converse can be proved easily (exercise).


Reconstruction of the initial state x0: We have

y = Lx0
L∗y = L∗Lx0
x0 = (L∗L)⁻¹L∗y

x0 = [M(t0, t1)]⁻¹ ∫_{t0}^{t1} Φ∗(τ, t0)C∗(τ)y(τ) dτ

Duality Theorem: The linear system

ẋ = A(t)x + B(t)u (4)

is controllable iff the adjoint system

ẋ = −A∗(t)x,  y = B∗(t)x (5)

is observable.

Proof: If Φ(t, t0) is the transition matrix generated by A(t), then Φ∗(t0, t) is the transition matrix generated by −A∗(t). The system (5) is observable iff the observability Grammian

M(t0, t1) = ∫_{t0}^{t1} [Φ∗(t0, t)]∗B(t)B∗(t)Φ∗(t0, t) dt is non-singular

⟺ ∫_{t0}^{t1} Φ(t0, t)B(t)B∗(t)Φ∗(t0, t) dt is non-singular

⟺ ∫_{t0}^{t1} Φ(t1, t0)Φ(t0, t)B(t)B∗(t)Φ∗(t0, t)Φ∗(t1, t0) dt is non-singular

⟺ ∫_{t0}^{t1} Φ(t1, t)B(t)B∗(t)Φ∗(t1, t) dt is non-singular

⟺ W(t0, t1) is non-singular

⟺ the system (4) is controllable.
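For constant matrices the duality can be checked via the rank tests: (A, B) is controllable exactly when (−Aᵀ, Bᵀ) is observable. A small sketch (the example matrices are arbitrary, not from the notes):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B | AB | ... | A^(n-1)B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[0., 1.], [-1., 0.]])
B = np.array([[0.], [1.]])

rank_c = np.linalg.matrix_rank(ctrb(A, B))       # controllability of (A, B)
rank_o = np.linalg.matrix_rank(obsv(-A.T, B.T))  # observability of the adjoint system
print(rank_c, rank_o)  # 2 2
```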

Example:

d/dt (x1)   (−2 −2  0)(x1)
     (x2) = ( 0  0  1)(x2)
     (x3)   ( 0 −3 −4)(x3)

that is, ẋ = Ax, with observation y(t) = [1, 0, 1]x(t), i.e., y = Cx.

O = ( C  )   ( 1   0   1)
    ( CA ) = (−2  −5  −4)
    ( CA²)   ( 4  16  11)

has rank 3 ⟹ (A, C) is observable.
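The rank claim for this example can be verified directly (a small check, not part of the original notes):

```python
import numpy as np

A = np.array([[-2., -2., 0.],
              [0., 0., 1.],
              [0., -3., -4.]])
C = np.array([[1., 0., 1.]])

O = np.vstack([C, C @ A, C @ A @ A])   # [C; CA; CA^2]
print(np.linalg.matrix_rank(O))        # 3 -> (A, C) observable
```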

Airplane Model (Linear Model):

[Figure: aircraft with pitch angle φ and flight path angle α measured from the horizontal]

Let us define the following variables:
φ(t): pitch angle, the angle the body of the plane makes with the horizontal.
α(t): flight path angle; the path of the flight is along a straight line at an angle α with the horizontal (α > 0 for ascending, α < 0 for descending).
h(t): altitude of the plane at time t.
c: the constant non-zero ground speed at which the plane flies.
w: natural oscillation frequency of the pitch angle.
a, b: constants.
u(t): the control input, applied to the aircraft by the elevators at the tail.

The mathematical model of the system for small φ and α is given by

α̇ = a(φ − α)
φ̈ = −w²(φ − α − bu)
ḣ = cα

Let x1 = α, x2 = φ, x3 = φ̇, x4 = h. Then

     (x1)   (−a   a  0  0)(x1)   ( 0 )
d/dt (x2) = ( 0   0  1  0)(x2) + ( 0 ) u
     (x3)   (w²  −w² 0  0)(x3)   (w²b)
     (x4)   ( c   0  0  0)(x4)   ( 0 )

Show that the system is controllable.

Satellite Problem:


[Figure: satellite orbiting the Earth at radial distance r and angle θ, with radial thrust u1 and tangential thrust u2]

     (x1)   ( 0   1  0   0)(x1)   (0 0)
d/dt (x2) = (3w²  0  0  2w)(x2) + (1 0)(u1)
     (x3)   ( 0   0  0   1)(x3)   (0 0)(u2)
     (x4)   ( 0 −2w  0   0)(x4)   (0 1)

u1(t): radial thrusters; u2(t): tangential thrusters.

y(t) = (1 0 0 0) x(t)
       (0 0 1 0)

Show that the system is observable.

Only radial distance measurements are available:

y1(t) = (1, 0, 0, 0)x(t) = C1x(t)

( C1   )   (1    0   0  0 )
( C1A  ) = (0    1   0  0 )
( C1A² )   (3w²  0   0  2w)
( C1A³ )   (0   −w²  0  0 )

has rank 3. Thus the system is not observable with radial distance measurements alone.

Only measurements of the angle are available:

y2(t) = (0, 0, 1, 0)x(t) = C2x(t)

rank ( C2   ) = 4
     ( C2A  )
     ( C2A² )
     ( C2A³ )

This implies that with the measurement of the angle alone the system is observable.

Electrical Circuit Example: Consider the following circuit:


[Figure: electrical circuit with resistors R, capacitor C, inductor L, source u, and state variables x1, x2]

The state space representation is given by

(ẋ1)   (−2/(RC)  1/C)(x1)   (1/(RC))
(ẋ2) = ( −1/L     0 )(x2) + ( 1/L  ) u(t)

The observation equation is given by

y(t) = [−1  0] (x1) + u(t)
               (x2)

Q = [B | AB] = (1/(RC)   −2/(R²C²) + 1/(LC))
               ( 1/L          −1/(RLC)     )

The system is uncontrollable iff

det Q = 1/(R²LC²) − 1/(L²C) = 0,  i.e., iff R = √(L/C).

The observability matrix is

O = ( C )  = (  −1      0  )
    ( CA )   (2/(RC)  −1/C )

It has full rank, which implies observability of the system.

Observability Example: Consider the spring-mass system considered earlier, with the observation equation

y(t) = [0, 1] (x1)
              (x2)

so C = [0, 1].

The observability matrix is

O = ( C  ) = ( 0  1)
    ( CA )   (−1  0)

Rank is 2 ⟹ the system is observable.

Computation of the initial state x0: Let [t0, t1] = [−π, 0].

Φ(t, −π) = ( cos t  sin t)(−1  0)⁻¹  =  (−cos t  −sin t)
           (−sin t  cos t)( 0 −1)       ( sin t  −cos t)


CΦ(t, −π) = [0, 1] (−cos t  −sin t) = [sin t  −cos t]
                   ( sin t  −cos t)

M(−π, 0) = ∫_{−π}^{0} ( sin t) (sin t  −cos t) dt
                      (−cos t)

         = ∫_{−π}^{0} ( sin² t        −sin t cos t) dt
                      (−sin t cos t    cos² t     )

         = (π/2) (1 0)
                 (0 1)

With the observed output y(t) = ½(cos t − sin t), the reconstruction formula gives

(x1(−π))   = (2/π) ∫_{−π}^{0} ( sin t) y(t) dt
(x2(−π))                      (−cos t)

           = (2/π) (−π/4)  =  (−1/2)  = x0 ∈ IR²
                   (−π/4)     (−1/2)

STABILITY

Consider the dynamical system

ẋ = f(x, t) (6)

Let f(c, t) = 0 for all t, where c is some constant vector. Then if x(t0) = c, we have x(t) = c for all t ≥ t0. Thus solutions starting at c remain there, and c is said to be an equilibrium or critical point (or state). Clearly, by introducing the new variables x′i = xi − ci, we can transform the equilibrium point to the origin. We shall assume that f(0, t) = 0 for t ≥ t0. We shall also assume that there is no other constant solution in the neighbourhood of the origin, so that the origin is an isolated equilibrium point.

Example: The equilibrium points of the system described by

ẋ1 = x1 − 2x1x2
ẋ2 = −2x2 + x1x2

are (0, 0) and (2, 1/2). The above equation is an example of a predator-prey population model due to Volterra and used in biology.


Definition

An equilibrium state x = 0 is said to be:

Stable if for any positive scalar ε there exists a positive scalar δ such that

||x(t0)|| < δ ⟹ ||x(t)|| < ε,  t ≥ t0

Asymptotically stable if it is stable and in addition x(t) → 0 as t → ∞.

Unstable if it is not stable; that is, there exists an ε > 0 such that for every δ > 0 there exist an x(t0) with ||x(t0)|| < δ and a t1 > t0 such that ||x(t1)|| ≥ ε. If this holds for every x(t0) with ||x(t0)|| < δ, the equilibrium is completely unstable.

The above definitions are called 'stability in the sense of Lyapunov'. Regarded as a function of t in the n-dimensional state space, the solution x(t) of (6) is called a trajectory or motion.

In two dimensions we can give the definitions a simple geometrical interpretation. If the origin O is stable, then given the outer circle C with radius ε, there exists an inner circle C1 with radius δ1 such that trajectories starting within C1 never leave C. If O is asymptotically stable then there is some circle C2 of radius δ2 having the same property as C1, but in addition trajectories starting inside C2 tend to O as t → ∞.

Linear System Stability

Consider the system

ẋ = Ax (7)

Theorem

The system (7) is asymptotically stable at x = 0 if and only if A is a stability matrix, i.e., all characteristic roots λk of A have negative real parts. System (7) is unstable at x = 0 if some Re(λk) > 0, and completely unstable if all Re(λk) > 0.

Proof: The solution of (7) subject to x(0) = x0 is

x(t) = exp(At)x0 (8)

With f(λ) = exp(λt) we have (using Sylvester's formula)

exp(At) = Σ_{k=1}^{q} (Z_{k1} + Z_{k2}t + Z_{k3}t² + ⋯ + Z_{kα_k}t^{α_k−1}) exp(λ_k t)

where the λk are the eigenvalues of A, αk is the power of the factor (λ − λk) in the minimal polynomial of A, and the Zkl are constant matrices determined entirely by A. Using properties of norms we obtain

||exp(At)|| ≤ Σ_{k=1}^{q} Σ_{l=1}^{α_k} t^{l−1} |exp(λ_k t)| ||Z_{kl}||
           = Σ_{k=1}^{q} Σ_{l=1}^{α_k} t^{l−1} exp[Re(λ_k)t] ||Z_{kl}|| → 0 as t → ∞


provided Re(λk) < 0 for all k, since the above is a finite sum of terms each of which → 0 as t → ∞. Hence from (8) we get

||x(t)|| ≤ ||exp(At)|| ||x0|| → 0

So the system is asymptotically stable. If some Re(λk) is positive then it is clear from the expression for exp(At) that ||x(t)|| can tend to infinity as t tends to infinity, so the origin is unstable.
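The eigenvalue criterion translates directly into a numerical test (a sketch; the sample matrices below are arbitrary choices, not from the notes):

```python
import numpy as np

def is_asymptotically_stable(A):
    """x' = Ax is asymptotically stable iff every eigenvalue of A
    lies in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_asymptotically_stable(np.array([[-2., 0.], [1., -3.]])))  # True
print(is_asymptotically_stable(np.array([[0., 1.], [-1., 0.]])))   # False (eigenvalues ±i)
```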

[Figure: illustration of stable and unstable trajectories]

Time Varying Systems

Consider the non-autonomous system

ẋ = A(t)x

where A(t) is a continuous n × n matrix. For a linear time-varying system it can be shown that even if all eigenvalues of A(t) have negative real parts for all t, the system may be unstable.

Example: If

A(t) = (a cos² t − 1          1 − (a/2) sin 2t)
       (−1 − (a/2) sin 2t     a sin² t − 1    )

then the solution with x(0) = (x0, y0)ᵀ is

x(t) = ( e^{(a−1)t} cos t   e^{−t} sin t)(x0)
       (−e^{(a−1)t} sin t   e^{−t} cos t)(y0)

It can be seen that ||x(t)|| → ∞ as t → ∞ if 1 < a < 2, even though the eigenvalues of A(t),

(a − 2)/2 ± √(((2 − a)/2)² − (2 − a)),

have negative real parts.

We now give an example to show that the instability criteria of linear time-invariant systems do not apply to linear time-varying systems.

Consider the system ẋ(t) = A(t)x(t) with

A(t) = (−11/2 + (15/2) sin 12t     (15/2) cos 12t         )   (9)
       ( (15/2) cos 12t            −11/2 − (15/2) sin 12t )

The eigenvalues of A(t) are 2 and −13 for all t, and the eigenvalue 2 has a positive real part. However, the state transition matrix of A(t) in (9) is

X(t, 0) = ( ½e^{−t}(cos 6t + 3 sin 6t) + ½e^{−10t}(cos 6t − 3 sin 6t)      (1/6)e^{−t}(cos 6t + 3 sin 6t) − (1/6)e^{−10t}(cos 6t − 3 sin 6t) )
          ( ½e^{−t}(3 cos 6t − sin 6t) − ½e^{−10t}(3 cos 6t + sin 6t)      (1/6)e^{−t}(3 cos 6t − sin 6t) + (1/6)e^{−10t}(3 cos 6t + sin 6t) )

Clearly ‖X(t, 0)‖ <∞ for all t and ‖X(t, 0)‖ → 0 as t→∞. So the system is asymptotically stable.

We conclude from the above examples that stability and instability of linear time-varying systems cannot be determined from the eigenvalues of the system matrix A(t).


Perturbed Linear Systems

Consider perturbations of the system ẋ(t) = A(t)x(t) of the following form:

ẋ = A(t)x + B(t)x,  x(t0) = x0 (10)

where B(t) is an n × n continuous matrix defined on [0, ∞) satisfying the condition

lim_{t→∞} ||B(t)|| = 0. (11)

Theorem

If lim_{t→∞} A(t) = A, a constant matrix, if all the characteristic roots of A have negative real parts, and if B(t) satisfies condition (11), then all solutions of the system (10) tend to zero as t → ∞.

Proof: The solution of equation (10) is given by

x(t) = Φ(t, t0)x0 + ∫_{t0}^{t} Φ(t, s)B(s)x(s) ds.

Since all the characteristic roots of A have negative real parts, there exist positive constants M and α such that ||Φ(t, t0)|| ≤ Me^{−α(t−t0)}, t ≥ t0. Further, condition (11) holds and B(t) is continuous, hence there exists a constant b such that ||B(t)|| ≤ b. Hence

‖x(t)‖ ≤ ‖Φ(t, t0)x0‖ + ∫_{t0}^{t} ‖Φ(t, s)B(s)x(s)‖ ds

        ≤ M‖x0‖e^{−α(t−t0)} + ∫_{t0}^{t} M e^{−α(t−s)} b ‖x(s)‖ ds.

Therefore,

‖x(t)‖ e^{α(t−t0)} ≤ M‖x0‖ + Mb ∫_{t0}^{t} e^{α(s−t0)} ‖x(s)‖ ds.

By Gronwall's inequality,

‖x(t)‖ e^{α(t−t0)} ≤ M‖x0‖ e^{Mb(t−t0)},

so that

‖x(t)‖ ≤ M‖x0‖ e^{(Mb−α)(t−t0)},  t ≥ t0,

and lim_{t→∞} ‖x(t)‖ = 0 when Mb < α. Since ‖B(t)‖ → 0, the bound b can be made arbitrarily small by taking t0 sufficiently large, so Mb < α can always be arranged.

Hence the theorem.
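The theorem can be illustrated numerically. In the sketch below (the matrices and integration parameters are my own choices, not from the text), A is constant with characteristic roots −1 and −2, and ‖B(t)‖ → 0 exponentially; the solution of the perturbed system is seen to decay.

```python
import math

def rhs(t, x):
    # x' = (A + B(t)) x with A constant stable and ||B(t)|| -> 0 as t -> infinity.
    a = [[-1.0, 1.0],
         [0.0, -2.0]]
    d = math.exp(-0.5 * t)                  # decay factor driving ||B(t)|| -> 0
    b = [[0.3 * d, 0.5 * d],
         [0.2 * d, 0.4 * d]]
    m00, m01 = a[0][0] + b[0][0], a[0][1] + b[0][1]
    m10, m11 = a[1][0] + b[1][0], a[1][1] + b[1][1]
    return [m00 * x[0] + m01 * x[1], m10 * x[0] + m11 * x[1]]

x, t, dt = [2.0, -1.0], 0.0, 1e-3
for _ in range(20000):                      # forward Euler up to t = 20
    dx = rhs(t, x)
    x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
    t += dt
print(math.hypot(x[0], x[1]))               # tends to zero, as the theorem predicts
```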

Nonlinear Perturbation

Consider the equation of the form

ẋ = Ax + g(x)    (12)


where g(x) is very small compared to x and is continuous. If g(0) = 0 then x(t) = 0 is an equilibrium solution of (12); we would like to determine whether it is stable or unstable. This is settled by the following theorem.

Theorem

Suppose that g(x)/‖x‖ is a continuous function of x which tends to zero as x → 0. Then the solution x(t) = 0 of (12) is asymptotically stable if the zero solution of the linearized equation ẋ = Ax is asymptotically stable.

Proof: We know that any solution x(t) of (12) can be written in the form

x(t) = e^{At}x0 + ∫_{0}^{t} e^{A(t−s)} g(x(s)) ds.    (13)

We wish to show that ‖x(t)‖ tends to zero as t tends to infinity. Since ẋ = Ax is asymptotically stable, all the eigenvalues of A have negative real parts. Then we can find positive constants K and α such that ‖e^{At}x(0)‖ ≤ Ke^{−αt}‖x(0)‖ and ‖e^{A(t−s)}g(x(s))‖ ≤ Ke^{−α(t−s)}‖g(x(s))‖. Moreover, from our assumption that g(x)/‖x‖ is continuous and vanishes at x = 0, we can find a positive constant σ such that ‖g(x)‖ ≤ α‖x‖/2K whenever ‖x‖ ≤ σ. Consequently, equation (13) implies that

‖x(t)‖ ≤ ‖e^{At}x(0)‖ + ∫_{0}^{t} ‖e^{A(t−s)}g(x(s))‖ ds

        ≤ Ke^{−αt}‖x(0)‖ + (α/2) ∫_{0}^{t} e^{−α(t−s)} ‖x(s)‖ ds

as long as ‖x(s)‖ ≤ σ, 0 ≤ s ≤ t. Multiplying both sides by e^{αt} gives

e^{αt}‖x(t)‖ ≤ K‖x(0)‖ + (α/2) ∫_{0}^{t} e^{αs} ‖x(s)‖ ds.

Applying Gronwall’s inequality we get

‖x(t)‖ ≤ K‖x(0)‖ e^{−αt/2}    (14)

as long as ‖x(s)‖ ≤ σ, 0 ≤ s ≤ t. Now, if ‖x(0)‖ ≤ σ/K, then inequality (14) guarantees that ‖x(t)‖ ≤ σ for all t. Consequently inequality (14) holds for all t ≥ 0 whenever ‖x(0)‖ ≤ σ/K. Finally, we observe from (14) that ‖x(t)‖ ≤ K‖x(0)‖ and that ‖x(t)‖ approaches zero as t approaches infinity. Therefore the solution x(t) = 0 is asymptotically stable.

The motion of a simple pendulum with damping is given by

θ̈ + (k/m) θ̇ + (g/l) sin θ = 0.    (15)


We know that

sin θ = θ − θ³/3! + ··· = θ + f(θ),

where f(θ)/θ → 0 as θ → 0. This gives

θ̈ + (k/m) θ̇ + (g/l) θ + (g/l) f(θ) = 0.

Equation (15) is equivalent to (12) with x = (θ, θ̇)∗,

A = [   0        1
     −g/l     −k/m ],

and g(x) = (0, −(g/l) f(θ))∗. The matrix A has eigenvalues −k/(2m) ± (k²/(4m²) − g/l)^{1/2}, which have negative real parts if k, m, g, l are positive, and ‖g(x)‖/‖x‖ → 0 as ‖x‖ → 0. Hence the pendulum problem (15) is asymptotically stable.
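The asymptotic stability of the damped pendulum can be observed directly by integrating (15). A sketch (the parameter values k, m, g, l, the initial condition and the forward Euler integrator are my own choices):

```python
import math

k, m, g, l = 0.5, 1.0, 9.81, 1.0          # damping, mass, gravity, length (illustrative)
theta, omega = 1.0, 0.0                    # initial angle (rad) and angular velocity
dt = 1e-3
for _ in range(60000):                     # forward Euler up to t = 60 s
    # Angular acceleration from equation (15): theta'' = -(k/m) theta' - (g/l) sin(theta).
    alpha = -(k / m) * omega - (g / l) * math.sin(theta)
    theta, omega = theta + dt * omega, omega + dt * alpha
print(theta, omega)                        # both decay toward the equilibrium (0, 0)
```

The envelope of the decay is roughly e^{−kt/2m}, consistent with the real part −k/(2m) of the eigenvalues computed above.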

Nonlinear Systems

Lyapunov function: We define a Lyapunov function V (x) as follows:

1. V (x) and all its partial derivatives ∂V/∂x_i are continuous.

2. V (x) is positive definite, i.e. V (0) = 0 and V (x) > 0 for x ≠ 0 in some neighborhood ‖x‖ ≤ k of the origin.

3. the derivative of V along solutions,

V̇ = (∂V/∂x_1) ẋ_1 + (∂V/∂x_2) ẋ_2 + ··· + (∂V/∂x_n) ẋ_n
  = (∂V/∂x_1) f_1 + (∂V/∂x_2) f_2 + ··· + (∂V/∂x_n) f_n,

is negative semidefinite, i.e. V̇(0) = 0 and V̇(x) ≤ 0 for all x in ‖x‖ ≤ k.

Theorem

If for the differential equations (6) we can find a definite function V such that, by virtue of the given equations, its derivative V̇ is either identically zero or semidefinite with sign opposite to that of V, then the unperturbed motion is stable.

Proof: Let us choose an arbitrary, sufficiently small positive number ε > 0 and construct the sphere ∑x_j² = ε. Next, inside this sphere we construct a surface V = c. This is always possible because V is a continuous function that is equal to zero at the origin. Now we choose δ small enough that the sphere ∑x_j² = δ lies inside the surface V = c with no points in common. Let us show that an image point M set in motion from the sphere δ never reaches the sphere ε. This will prove the stability of the motion.


Without loss of generality we may assume that the function V is positive definite (if V < 0 we consider −V instead). According to the hypothesis of the theorem, the derivative of V by virtue of the equations of the perturbed motion is either negative or identically zero, i.e. V̇ ≤ 0. Then from the obvious identity

V − V_0 = ∫_{t_0}^{t} V̇ dt,

where V_0 is the value of the function at the initial point M_0, we get V − V_0 ≤ 0, which implies V ≤ V_0.

From this inequality it follows that for t ≥ t_0 the image point M is located either on the surface V = V_0 = c_1 (when V̇ ≡ 0) or inside this surface. Thus an image point M set in motion from a position M_0 located inside or on the sphere δ never moves outside the surface V = c_1; in particular, it cannot reach the surface of the sphere ε. This proves the theorem.
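As a concrete instance of the theorem, consider the system x_1' = x_2, x_2' = −x_1 − x_2 (my own choice, not from the text) with the candidate V(x) = x_1² + x_2². Along solutions, V̇ = 2x_1 x_2 + 2x_2(−x_1 − x_2) = −2x_2² ≤ 0: semidefinite with sign opposite to V, so the origin is stable. The sketch below samples V̇ on the unit ball to confirm the sign condition.

```python
import random

def V(x1, x2):
    return x1 * x1 + x2 * x2               # positive definite candidate

def Vdot(x1, x2):
    f1, f2 = x2, -x1 - x2                  # right-hand sides of the system
    return 2 * x1 * f1 + 2 * x2 * f2       # gradient of V dotted with f = -2*x2**2

random.seed(0)
ok = all(Vdot(random.uniform(-1, 1), random.uniform(-1, 1)) <= 1e-12
         for _ in range(1000))
print(ok)                                  # True: Vdot <= 0 on the sampled ball
```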

Applications of Lyapunov theory to linear systems

Consider the real linear time invariant system

ẋ = Ax    (16)

We now show how Lyapunov theory can be used to deal directly with (16) by taking as a potential Lyapunov function the quadratic form

V = x^T P x    (17)

where P is a real symmetric matrix. The time derivative of V with respect to (16) is

V̇ = ẋ^T P x + x^T P ẋ
  = x^T A^T P x + x^T P A x
  = −x^T Q x,

where

A^T P + PA = −Q,    (18)

and it is easy to see that Q is also symmetric. If P and Q are both positive definite then the system (16) is asymptotically stable. If Q is positive definite and P is negative definite or indefinite, then in both cases V can take negative values in a neighbourhood of the origin, so (16) is unstable.

Theorem

The real matrix A is a stability matrix if and only if for any given real symmetric positive definite(r.s.p.d.) matrix Q the solution P of the continuous Lyapunov matrix equation (18) is also positivedefinite.

Notice that it would be of no use choosing P to be positive definite and calculating Q from (18): unless Q turned out to be definite or semidefinite, nothing could be inferred about asymptotic stability from the Lyapunov theorems. If A has complex elements then the above theorem still holds, but with P and Q in (18) Hermitian and A^T replaced by A∗.
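The theorem gives a computational stability test: pick an r.s.p.d. Q (say Q = I), solve (18) for P, and check that P is positive definite. A sketch using NumPy and the Kronecker-product identities vec(A^T P) = (I ⊗ A^T) vec(P) and vec(PA) = (A^T ⊗ I) vec(P) (the example matrix A is my own choice):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])               # eigenvalues -1 and -2: a stability matrix
Q = np.eye(2)                              # an r.s.p.d. choice

n = A.shape[0]
# (I kron A^T + A^T kron I) vec(P) = -vec(Q), with column-major (Fortran-order) vec.
L = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(L, -Q.flatten(order="F")).reshape((n, n), order="F")

print(np.allclose(A.T @ P + P @ A, -Q))    # True: equation (18) is satisfied
print(np.linalg.eigvalsh(P))               # both positive, so A is a stability matrix
```

For this A and Q the exact solution is P = [[5/4, 1/4], [1/4, 1/4]], whose eigenvalues are positive, confirming stability.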


STABILIZABILITY

Stabilization

In this section we will discuss the linear time invariant system

ẋ = Ax + Bu,  x ∈ R^n, u ∈ R^m.    (19)

If this represents the linearization of some operating plant about a desired equilibrium state, then that state is represented by x = 0, u = 0. Now suppose that the uncontrolled (u = 0) homogeneous system

ẋ = Ax

fails to be asymptotically stable. One of the tasks of the control analyst is to use the control u in such a way as to remedy this situation. Because of its simplicity, for both implementation and analysis, the traditionally favoured means of accomplishing this objective is a linear feedback relation

u = Kx (20)

where the control u(t) is determined as a linear function of the current state x(t). The problem now becomes that of choosing the m × n feedback matrix K in such a way that the modified homogeneous system obtained by substituting (20) into (19), that is,

ẋ = (A + BK)x    (21)

is such that A+BK has only eigenvalues with negative real parts.

The system (19) is called an open-loop system, while the modified system (21) is called a closed-loop system.

One of the basic results in the control theory of constant-coefficient linear systems is that controllability implies stabilizability, the latter being the property of (19) that admits the possibility of selecting K so that A + BK is a stability matrix.

Definition

The linear time invariant control system (19) is stabilizable if there exists an m × n matrix K such that A + BK is a stability matrix.

Theorem

If the system (19) is controllable, then it is stabilizable.

Proof. Assume that the system (19) is controllable. Then the controllability grammian matrix

W_T = W(0, T) = ∫_{0}^{T} e^{−At} B B∗ e^{−A∗t} dt    (22)

is positive definite for T > 0. Define the linear feedback control law

u = −B∗ W_T^{−1} x = K_T x    (23)


It can be shown that this u stabilizes (19). Now we compute

AW_T + W_T A∗ = ∫_{0}^{T} [A e^{−At} B B∗ e^{−A∗t} + e^{−At} B B∗ e^{−A∗t} A∗] dt

              = − ∫_{0}^{T} (d/dt)[e^{−At} B B∗ e^{−A∗t}] dt

              = − e^{−AT} B B∗ e^{−A∗T} + BB∗.    (24)

Since W_T is positive and symmetric for T > 0, we can rewrite (24) as

(A − BB∗W_T^{−1}) W_T + W_T (A − BB∗W_T^{−1})∗ + BB∗ + e^{−AT} B B∗ e^{−A∗T} = 0.

Now

BB∗ + e^{−AT} B B∗ e^{−A∗T} ≥ BB∗

and, since W_T is positive, we see that (A − BB∗W_T^{−1})∗, and hence A − BB∗W_T^{−1} itself, is a stability matrix.

References:

1. R.W. Brockett: Finite Dimensional Linear Systems, Wiley, New York, 1970.

2. F.M. Callier and C.A. Desoer: Linear System Theory, Narosa Publishing House, New Delhi, 1991.

3. C.T. Chen: Linear System Theory and Design, Saunders College Publishing, New York, 1984.

4. M.C. Joshi: Ordinary Differential Equations: Modern Perspective, Alpha Science International Ltd, 2006.

5. E.D. Sontag: Mathematical Control Theory: Deterministic Finite Dimensional Systems, Springer, New York, 1998.
