Matrices and Linear Algebra in Control Applications


Linear Algebra and Control Applications: Notes


    Synopsis

The modern control approach using the state space method requires extensive use of matrices and a deep understanding of linear algebra.


Contents

Synopsis

1 Introduction
  1.1 Overview
  1.2 Literature Survey
  1.3 Linear Algebra Basics
    1.3.1 Field
    1.3.2 Vector Space
  1.4 Linear Combination and Vector Spaces

2 Matrix Algebra
  2.1 Introduction
  2.2 Matrices
  2.3 Matrices and Linear Equations
  2.4 Linear Independence and Matrices
    2.4.1 Linear Dependence/Independence of Vectors and Determinants
  2.5 Vector Span and Matrices
    2.5.1 Column Space and Row Space
  2.6 Basis and Matrices
    2.6.1 Pivot Rows and Columns of a Matrix and Basis
  2.7 Dimension, Rank and Matrices
    2.7.1 Dimension of Whole Matrix
    2.7.2 Dimension of Upper Triangular Matrix
    2.7.3 Dimension of Diagonal Matrix
    2.7.4 Dimension of Symmetric Matrix
  2.8 The Null Space and the Four Subspaces of a Matrix
  2.9 The Complete Solution to Rx = 0
    2.9.1 Row Space
    2.9.2 Column Space
    2.9.3 Dimension of Null Space
    2.9.4 The Complete Solution to Rx = 0
  2.10 The Complete Solution to Ax = 0
  2.11 The Complete Solution to Ax = b
    2.11.1 Full Row Rank (r = m); Full Column Rank (r = n)
    2.11.2 Full Row Rank (r = m); Column Rank Less (r < n)
    2.11.3 Full Column Rank (r = n); Row Rank Less (r < m)
    2.11.4 Row Rank and Column Rank Less (r < m; r < n)
  2.12 Orthogonality of the Four Subspaces
  2.13 Projections
    2.13.1 Projection Onto a Line
    2.13.2 Projection with Trigonometry
    2.13.3 Projection Onto a Subspace
  2.14 Least Squares Solution to Ax = b

3 Matrices and Determinants
  3.1 Introduction
  3.2 Determinant of a Square Matrix

4 Solution to Dynamic Problems with Eigenvalues and Eigenvectors
  4.1 Introduction
  4.2 Linear Differential Equations and Matrices

5 Matrices and Linear Transformations
  5.1 Introduction
  5.2 Linear Transformations


    Chapter 1

    Introduction

    1.1 Overview

Control systems engineering can be divided into two approaches: the classical or traditional approach, in which transfer functions (the ratio of the Laplace transform of the output to that of the input) are used, and the modern approach, in which state space methods are used.

The state space approach has several advantages over the transfer function approach. Chief among them is that it is computationally straightforward and applies equally well to multivariable and non-linear control problems.

State space methods use matrices extensively and require a thorough understanding of linear algebra.

    1.2 Literature Survey

The books Linear Algebra and its Applications and Introduction to Linear Algebra by Gilbert Strang, together with his video lectures on the MIT website, give very good insight into linear algebra theory. These notes are based on the above books and video lectures.


    1.3 Linear Algebra Basics

In the study of linear systems of equations, the vector space over a field is an important definition stemming from linear algebra.

    1.3.1 Field

A field, F, is a set of elements called scalars, together with the two operations of addition and multiplication, for which the following axioms hold:

(1) For any pair of elements a, b ∈ F, there is a unique sum a + b ∈ F and a unique product a·b ∈ F. Further, by the law of commutativity, a + b = b + a and a·b = b·a.

(2) For any three elements a, b, c ∈ F, the associative laws a + (b + c) = (a + b) + c and a(bc) = (ab)c, as well as the distributive law a(b + c) = ab + ac, hold good.

(3) F contains the zero element, denoted by 0, and the unity element, denoted by 1, such that a + 0 = a and a·1 = a for every a ∈ F.

(4) F contains the additive inverse, i.e., for every a ∈ F there exists an element b ∈ F such that a + b = 0.

(5) F contains the multiplicative inverse, i.e., for every non-zero a ∈ F there exists an element b ∈ F such that a·b = 1.

    1.3.2 Vector Space

A vector space (or linear vector space, or linear space) over the field F is a set V of elements called vectors (also denoted V(F)), together with two operations of vector addition and scalar multiplication, for which the following axioms hold:

(1) For any pair of vectors x, y ∈ V, there is a unique sum x + y ∈ V. Further, by the law of commutativity, x + y = y + x.

(2) For any vector x ∈ V and scalar α ∈ F, there is always a unique product αx ∈ V.


(3) For any three vectors x, y, z ∈ V, the associative law x + (y + z) = (x + y) + z holds good.

(4) For any two vectors x, y ∈ V and scalar α ∈ F, the distributive law α(x + y) = αx + αy holds good.

(5) For any two scalars α, β ∈ F and vector x ∈ V, the associative law α(βx) = (αβ)x and the distributive law (α + β)x = αx + βx hold good.

(6) V contains the zero or null vector, denoted by 0, such that x + 0 = x for every x ∈ V.

(7) The unity scalar 1 ∈ F satisfies 1·x = x for every x ∈ V.

(8) For every x ∈ V, there exists an element −x ∈ V such that x + (−x) = 0.

    1.4 Linear Combination and Vector Spaces

Linear algebra is based on two operations on vectors, which define the vector space: vector addition and scalar multiplication. If c and d are scalars in F, and u and v are two vectors in V, then a linear combination of u and v is defined as cu + dv.


    Chapter 2

    Matrix Algebra

    2.1 Introduction

    In this chapter, an understanding of matrices is developed from linear algebra basics.

    2.2 Matrices

We can form linear combinations of vectors using matrices. For example, let three vectors u, v and w be given as

$$u = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}; \quad v = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}; \quad w = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

Their linear combination in three-dimensional space can be given by cu + dv + ew, i.e.,

$$c \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + d \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} + e \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} c \\ d - c \\ e - d \end{bmatrix} \tag{2.1}$$


The above linear combination can be rewritten using matrices as

$$\begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = \begin{bmatrix} c \\ d - c \\ e - d \end{bmatrix} \tag{2.2}$$

i.e., the matrix times the vector can be given as

$$Ax = \begin{bmatrix} u & v & w \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = cu + dv + ew \tag{2.3}$$

where the scalars c, d and e are defined as the components of the vector x.

Thus the rewriting of the linear combination in matrix form has brought about a crucial change in viewpoint, explained as follows:

(a) At first, the scalars c, d and e were multiplying the vectors u, v and w to form the linear combination cu + dv + ew.

(b) In the matrix form, the matrix A is multiplying the scalars as

$$Ax = \begin{bmatrix} u & v & w \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = cu + dv + ew \tag{2.4}$$

Thus the matrix A acts on the vector x.

The result of the matrix A multiplying the vector x, i.e., Ax, can be defined as the column vector b, expressed as

b = Ax (2.5)

Linear combinations are the key to linear algebra, and the output Ax is a linear combination of the columns of A. Thus matrices can be said to be made up of row vectors and column vectors: an m × n matrix consists of m row vectors of n elements each, or equivalently n column vectors of m elements each.
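As a concrete check of this column picture, here is a minimal numpy sketch (numpy is our assumption; the notes themselves contain no code) multiplying the matrix A of eqn. (2.2) by a vector x and comparing the result with the explicit combination of columns:

```python
import numpy as np

# Matrix A of eqn. (2.2); its columns are u, v, w
A = np.array([[ 1,  0, 0],
              [-1,  1, 0],
              [ 0, -1, 1]])
x = np.array([2, 3, 4])  # components c, d, e

b = A @ x                                  # matrix times vector
combo = 2*A[:, 0] + 3*A[:, 1] + 4*A[:, 2]  # c*u + d*v + e*w

print(b, np.array_equal(b, combo))  # [2 1 1] True
```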


    2.3 Matrices and Linear Equations

Matrices and linear algebra concepts can be very useful in solving a system of linear equations. For the example discussed above, where

$$Ax = \begin{bmatrix} u & v & w \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = b \tag{2.6}$$

from the linear algebra point of view, we were interested in computing the linear combination cu + dv + ew to find b.

In the case of linear equations, we consider c, d and e to represent the elements x1, x2 and x3 of the column vector x, and the problem becomes that of finding which combination of u, v and w produces a particular vector b, i.e., finding the input x that gives the desired output b = Ax. Thus Ax = b can be seen as a system of linear equations that has to be solved for x1, x2 and x3.
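A minimal sketch of this solve step (again assuming numpy; the values continue the example above):

```python
import numpy as np

A = np.array([[ 1.0,  0.0, 0.0],
              [-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0]])
b = np.array([2.0, 1.0, 1.0])

# Which combination x of the columns of A produces b?
x = np.linalg.solve(A, b)
print(x)  # [2. 3. 4.] -- recovers c, d, e from the previous sketch
```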

    2.4 Linear Independence and Matrices

A set of vectors v1, v2, ..., vn is said to be linearly independent if their linear combination α1v1 + α2v2 + ... + αnvn, where α1, α2, ..., αn are scalars, equals the zero vector if and only if

α1 = α2 = ... = αn = 0 (2.7)

If even one αi, i = 1, 2, ..., n, can be non-zero, then the set is linearly dependent.

The columns of a matrix A can be considered as column vectors. Thus the columns of A are linearly independent when the only solution to Ax = 0 is x = 0.

If x = [x1 x2 ... xn]ᵀ and

$$A = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix} \tag{2.8}$$

then v1, v2, ..., vn are linearly independent if x1v1 + x2v2 + ... + xnvn = 0 holds if and only if x1 = x2 = ... = xn = 0.


2.4.1 Linear Dependence/Independence of Vectors and Determinants

The linear dependence or independence of the columns of a matrix is an important property, and it can be verified by calculating the determinant of the matrix A. If the determinant |A| is zero, i.e., matrix A is singular, then the column vectors are linearly dependent: Ax = 0 then admits solutions x ≠ 0, and hence the columns are dependent.

If the determinant |A| is not zero, i.e., matrix A is invertible, then the column vectors are linearly independent: Ax = 0 with |A| ≠ 0 implies x = 0, and hence the column vectors are independent.
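A quick numerical illustration of this determinant test (numpy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])         # second column = 2 * first column

print(np.linalg.det(A))            # 0.0 -> singular, columns dependent
print(np.linalg.det(np.eye(2)))    # 1.0 -> invertible, columns independent
```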

    2.5 Vector Span and Matrices

A set of vectors is said to span a vector space (or linear space) if their linear combinations fill that space. The set of vectors may be dependent or independent. In other words, if the given set of vectors is enough to produce the rest of the vectors in that space, then that set is said to span the vector space.

    2.5.1 Column Space and Row Space

The columns of a matrix span its column space: the columns are considered as vectors, and the column space is the vector space consisting of all linear combinations of these column vectors. In a similar manner, the rows of a matrix span its row space. The row space of the matrix A is the column space of its transpose Aᵀ.

    2.6 Basis and Matrices

A basis for a vector space is a set of vectors with two properties:

(a) The basis vectors are linearly independent.

(b) They span the vector space.


This combination of properties is fundamental to linear algebra. Every vector v in the space is a combination of the basis vectors, because they span the space. Most important, the combination of basis vectors that forms v is unique, since the basis vectors v1, v2, ..., vn are independent. Thus there is one and only one way to write v as a combination of the basis vectors.

The columns of every invertible n × n matrix form a basis for ℝⁿ. In other words, the vectors v1, v2, ..., vn are a basis of ℝⁿ exactly when they are the columns of an n × n invertible matrix. Thus ℝⁿ has infinitely many different bases.

If a set of vectors is exactly the number required to produce the rest of the vectors in the space (neither too many nor too few), then it forms a basis.

    2.6.1 Pivot Rows and Columns of a Matrix and Basis

The pivot columns of a matrix A are a basis for its column space. The pivot rows of A are a basis for its row space. The pivot rows and columns of the echelon form R (reduced form) also form a basis for its row space and column space respectively.

The columns of the n × n identity matrix give the standard basis for ℝⁿ.

    2.7 Dimension, Rank and Matrices

The dimension of a vector space is the number of basis vectors in that space.

The rank of a matrix is the dimension of its column space. The rank is also the dimension of the row space of that matrix.
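Numerically, this dimension can be checked with numpy's matrix_rank (an assumed tool, not part of the notes):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice row 1: not independent
              [0.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(A))    # 2 -> dimension of the column space
print(np.linalg.matrix_rank(A.T))  # 2 -> row rank equals column rank
```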

    2.7.1 Dimension of Whole Matrix

The dimension of the whole space of n × n matrices is n².

For a 2 × 2 matrix, considering the basis

$$A_1, A_2, A_3, A_4 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \tag{2.9}$$

the dimension is 4.

    2.7.2 Dimension of Upper Triangular Matrix

The dimension of the subspace of upper triangular matrices is ½n² + ½n = n(n + 1)/2.

For the example basis in eqn. (2.9) above, A1, A2 and A4 form a basis for the upper triangular matrices, and hence the dimension is 3.

    2.7.3 Dimension of Diagonal Matrix

The dimension of the subspace of diagonal matrices is n.

For the example basis in eqn. (2.9) above, A1 and A4 form a basis for the diagonal matrices. Hence the dimension is 2.

    2.7.4 Dimension of Symmetric Matrix

The dimension of the subspace of symmetric matrices is ½n² + ½n = n(n + 1)/2.

For the 2 × 2 case, A1, A4 and the symmetric combination A2 + A3 form a basis for the symmetric matrices. Hence the dimension is 3.

2.8 The Null Space and the Four Subspaces of a Matrix

The null space of a given matrix A consists of all solutions to Ax = 0. It is denoted by N(A).

One immediate solution to Ax = 0 is x = 0. For invertible matrices, this is the only solution. For matrices that are not invertible, i.e., singular matrices, there are non-zero solutions to Ax = 0. Each solution x belongs to the null space N(A).

The m × n matrix A can be square or rectangular. The solution vector x has n components in this case. Thus the solutions are vectors in ℝⁿ, and the null space is a subspace of ℝⁿ.

The column space of an m × n matrix A, denoted by C(A), is a subspace of ℝᵐ. The row space of an m × n matrix A, denoted by C(Aᵀ), is a subspace of ℝⁿ.

The matrices A and Aᵀ are usually different. Their column spaces and null spaces are also different.

For the m × n matrix A, the left null space consists of all solutions to Aᵀy = 0. It is denoted by N(Aᵀ). The left null space is a subspace of ℝᵐ, since the solution vector y has m components in this case.

    2.9 The Complete Solution to Rx = 0

The matrix A can generally be reduced to its row echelon form R so that the four subspaces can be easily identified. This is possible because the dimensions of the four subspaces are the same for A and R. The basis for each subspace is found and its dimension checked as follows.

Consider the following example of a 3 × 5 matrix in reduced form R:

$$R = \begin{bmatrix} 1 & 2 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{2.10}$$

The pivot rows in the above matrix are 1 and 2, and the pivot columns are 1 and 4. Thus the rank of this matrix R is 2, since there are two pivots and the two pivots form a 2 × 2 identity matrix.

    2.9.1 Row Space

The row space of the above matrix R contains combinations of all three rows. However, the third row contains only zeros. The first two non-zero rows are independent and form a basis, since they span the row space C(Rᵀ). The dimension of the row space, and hence the rank, is 2.


    2.9.2 Column Space

The column space of the above matrix R contains combinations of all five columns. The pivot columns are 1 and 4, and they form a basis for the column space C(R). They are independent, since together they form the 2 × 2 identity matrix, and no combination of them gives a zero column. All other columns are linear combinations of these two columns, so columns 1 and 4 span the column space and form a basis for C(R). The dimension of the column space is the rank, 2. This matches the row space, since both the pivot rows and the pivot columns form a 2 × 2 identity matrix whose rank is 2.

    2.9.3 Dimension of Null Space

In the above example, the number of columns is n = 5 and the rank is r = 2. Hence the null space has dimension n − r = 5 − 2 = 3. Thus there are three free variables. Here columns 2, 3 and 5 are free and yield three special solutions to Rx = 0. Thus the null space N(R) has dimension n − r.

From inspection of the matrix R, it is found that:

(a) Columns 1 and 4 are linearly independent and form a 2 × 2 identity matrix when placed together. Setting the free variables to zero, we can solve for x1 to x5 as x1 = x2 = ... = x5 = 0, which is the unique solution with all free variables zero.

(b) Column 2 is two times column 1. The solution vector satisfying Rx = 0 corresponding to column 2 is (−2, 1, 0, 0, 0). This is the special solution for x2 = 1.

(c) Column 3 is five times column 1. The solution vector satisfying Rx = 0 corresponding to column 3 is (−5, 0, 1, 0, 0). This is the special solution for x3 = 1.

(d) Column 5 is seven times column 1 plus three times column 4. The solution vector satisfying Rx = 0 corresponding to column 5 is (−7, 0, 0, −3, 1). This is the special solution for x5 = 1.


(e) Thus the null space is spanned by the above three solution vectors, i.e.,

$$N(R) = \text{span}\left\{ \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -5 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -7 \\ 0 \\ 0 \\ -3 \\ 1 \end{bmatrix} \right\} \tag{2.11}$$

These solution vectors are independent and hence form a basis. All solutions to Rx = 0 are linear combinations of these three column vectors.
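As a sanity check (numpy assumed), multiplying R by the three special solutions of eqn. (2.11) should give zero columns:

```python
import numpy as np

R = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [0, 0, 0, 0, 0]], dtype=float)

# Special solutions of eqn. (2.11), one column per free variable
S = np.array([[-2, -5, -7],
              [ 1,  0,  0],
              [ 0,  1,  0],
              [ 0,  0, -3],
              [ 0,  0,  1]], dtype=float)

print(np.allclose(R @ S, 0))  # True: every column solves Rx = 0
```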

    2.9.4 The Complete Solution to Rx = 0

Thus the complete solution to Rx = 0 for the given example matrix R can be built up in two steps:

(i) For the pivot variables, corresponding to the independent columns 1 and 4 of R, setting all free variables to zero gives only the solution x1 = x2 = ... = x5 = 0.

(ii) For the free columns 2, 3 and 5, which are dependent (linear combinations of columns 1 and/or 4), there are infinitely many solutions. For the given example, three solutions which form a basis for the null space are as follows:

(aa) If x2 = 1 is chosen, then x1 = −2 and x3 = x4 = x5 = 0. Thus all linear combinations which satisfy x1 = −2x2 and x3 = x4 = x5 = 0 are solutions to Rx = 0. This corresponds to

$$x = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{2.12}$$

(ab) If x3 = 1 is chosen, then x1 = −5 and x2 = x4 = x5 = 0. Thus all linear combinations which satisfy x1 = −5x3 and x2 = x4 = x5 = 0 are solutions to


Rx = 0. This corresponds to

$$x = \begin{bmatrix} -5 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \tag{2.13}$$

(ac) If x5 = 1 is chosen, then x1 = −7, x4 = −3 and x2 = x3 = 0. Thus all linear combinations which satisfy x1 = −7x5, x4 = −3x5 and x2 = x3 = 0 are solutions to Rx = 0. This corresponds to

$$x = \begin{bmatrix} -7 \\ 0 \\ 0 \\ -3 \\ 1 \end{bmatrix} \tag{2.14}$$

    2.10 The Complete Solution to Ax = 0

The dimensions of the four subspaces for a matrix A are the same as for its reduced form R. A can be reduced to R through an elimination matrix E. This invertible elimination matrix E is a product of elementary matrices such that R = EA and A = E⁻¹R.

For the reduced form R given by

$$R = \begin{bmatrix} 1 & 2 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{2.15}$$

a matrix A can be chosen such that it reduces to R, e.g.,

$$A = \begin{bmatrix} 1 & 2 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 3 \\ 2 & 4 & 10 & 1 & 17 \end{bmatrix} \tag{2.16}$$

It can be seen from the above matrices A and R that the following properties hold:


(a) A has the same row space as R, and thus the same dimension and the same basis. This is because every row of A is a combination of the rows of R. Elimination changes rows but not row spaces.

(b) The matrix A may not have the same column space as R, because columns of R may end in zeros while the corresponding columns of A may not. However, for every matrix, the number of independent rows equals the number of independent columns. Thus the column space dimensions of A and R are the same even though the column spaces themselves are different, because the number of pivot columns (which are independent) is the same for A and R. Thus the pivot columns of A are a basis for its column space.

(c) The matrix A has the same null space as R, and hence the same dimension and the same basis. This is because the elimination steps reducing A to R do not change the solutions of Ax = 0.

(d) The dimension of the left null space of A, i.e., the null space of Aᵀ, is the same as that of the left null space of R, since A and R have the same rank r and both left null spaces have dimension m − r.

For the m × n matrix A, if the column space has dimension r, then the dimension of its null space is n − r. Together they account for the whole space ℝⁿ, since r + (n − r) = n. Similarly, for the transpose of A, i.e., the n × m matrix Aᵀ, the column space has the same dimension r, while the dimension of its null space is m − r. Together they account for the whole space ℝᵐ, since r + (m − r) = m.

The Fundamental Theorem of Linear Algebra for an m × n matrix A of rank r is given as follows: the column space and row space both have dimension r, for A as well as for Aᵀ. The null spaces have dimensions n − r for A and m − r for Aᵀ.
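These dimension counts can be verified numerically for the matrix A of eqn. (2.16) (numpy and scipy assumed):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [2, 4, 10, 1, 17]], dtype=float)
m, n = A.shape                     # 3, 5

r = np.linalg.matrix_rank(A)
print(r)                           # 2
print(null_space(A).shape[1])      # n - r = 3, dimension of N(A)
print(null_space(A.T).shape[1])    # m - r = 1, dimension of N(A^T)
```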

    2.11 The Complete Solution to Ax = b

The solution to Ax = b when b = 0 was dealt with in the previous sections. The elimination operation reduced the matrix A to its reduced form R and converted the problem to solving Rx = 0. The free variables were assigned the values one and zero, and the pivot variables were then found by back substitution. Since x = 0 is one of the solutions when b = 0, the solutions x lie in the null space of A.

When the right hand side b of Ax = b is not zero, the solution can be separated into two parts as x = xp + xn, where x is known as the complete solution, xp is known as the particular solution and xn is known as the null space solution. There are four possibilities for the complete solution of Ax = b, depending on the rank r. They are discussed as follows, with A taken to be a non-zero m × n matrix.

2.11.1 Full Row Rank (r = m); Full Column Rank (r = n)

This condition occurs when A is a non-singular square matrix. Thus A is invertible, and Ax = b has exactly one solution. The null space N(A) contains the zero vector only. The column vector b is a linear combination of the column vectors of the matrix A. In this case, the complete solution is x = xp + 0.

Example 1

Let

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 1 & 1 \end{bmatrix} \tag{2.17}$$

and

$$b = \begin{bmatrix} 1 \\ 6 \\ 1 \end{bmatrix} \tag{2.18}$$

A is a non-singular matrix, since its determinant |A| = 2 ≠ 0. Thus the solution can be found from x = A⁻¹b as given below:

$$x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & -0.5 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 6 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ -2 \end{bmatrix} \tag{2.19}$$

Thus the solution is unique (the only solution), given by x = [1 3 −2]ᵀ.


2.11.2 Full Row Rank (r = m); Column Rank Less (r < n)

In this case, Ax = b has an infinite number of solutions. Every matrix A with full row rank, i.e., r = m, satisfies the following properties:

(a) All rows have pivots, and the reduced form R has no zero rows.

(b) There is a solution to Ax = b for any and every right hand side b.

(c) The null space N(A) contains, besides the zero vector x = 0, n − r = n − m special solutions of Ax = 0. These n − m non-zero vectors form a basis for the null space. There are infinitely many solution vectors, namely all linear combinations of these basis vectors.

Example 2

Let

$$A = \begin{bmatrix} 1 & 5 & 0 & 3 \\ 0 & 0 & 1 & 2 \\ 1 & 5 & 1 & 5 \end{bmatrix} \tag{2.20}$$

and

$$b = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} \tag{2.21}$$

To solve this easily, the system of equations Ax = b is reduced to a simpler system Rx = d by using the augmented matrix [A b] (wherein b is added as an extra column to the matrix A) and applying elimination steps to this augmented matrix.

Thus for the above system, the augmented matrix is given by

$$[A\;b] = \begin{bmatrix} 1 & 5 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 & 3 \\ 1 & 5 & 1 & 5 & 4 \end{bmatrix} \tag{2.22}$$

Applying the elimination step R3 → R3 − R1 gives

$$[R\;b] = \begin{bmatrix} 1 & 5 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 & 3 \end{bmatrix} \tag{2.23}$$


Applying the elimination step R3 → R3 − R2 gives

$$[R\;b] = \begin{bmatrix} 1 & 5 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{2.24}$$

Let the solution vector be x = [x1 x2 x3 x4]ᵀ. The pivot columns of R are 1 and 3, while the free columns are 2 and 4. Setting the free variables x2 and x4 to zero, back-substitution gives x3 = 3 and x1 = 1. Thus the particular solution for Ax = b for the given b is xp = [1 0 3 0]ᵀ.

However, since the rank of the matrix A is 2 while the number of columns is 4, the null space N(A) contains n − r = 4 − 2 = 2 basis vectors apart from the zero vector. These two vectors form the basis of the null space, and their linear combinations give an infinite number of solutions to Ax = b.

For the given Ax = b, the null space basis is found by reversing the signs of the entries 5, 3 and 2 in the free columns of R:

$$\begin{bmatrix} -5 \\ 1 \\ 0 \\ 0 \end{bmatrix} \;(x_2 = 1), \qquad \begin{bmatrix} -3 \\ 0 \\ -2 \\ 1 \end{bmatrix} \;(x_4 = 1) \tag{2.25}$$

Thus the complete solution to Ax = b in this case is given by x = xp + xn, where xp is the particular solution which solves Axp = b, and xn ranges over combinations of the n − r special solutions which solve Axn = 0. For the given example, the complete solution is

$$x = \begin{bmatrix} 1 \\ 0 \\ 3 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} -5 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -3 \\ 0 \\ -2 \\ 1 \end{bmatrix} \tag{2.26}$$

As can be seen from the above system of equations, Ax = b comprises three planes in x1-x2-x3-x4 space. The first and third planes are parallel to each other; the first and second planes are not parallel and intersect along two lines. Adding the null space vectors xn moves the solution along these two lines, and hence over the plane containing the two lines. Then x = xp + xn gives the complete solution.
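A short numerical sketch of this complete solution (numpy and scipy assumed): the particular solution plus any null space combination still solves Ax = b.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 5, 0, 3],
              [0, 0, 1, 2],
              [1, 5, 1, 5]], dtype=float)
b = np.array([1.0, 3.0, 4.0])

xp = np.array([1.0, 0.0, 3.0, 0.0])  # particular solution from above
N = null_space(A)                    # two basis vectors of N(A)

x = xp + N @ np.array([2.0, -1.0])   # xp plus an arbitrary null combination
print(np.allclose(A @ x, b))         # True
```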

2.11.3 Full Column Rank (r = n); Row Rank Less (r < m)

In this case, Ax = b has either no solution or exactly one. Every matrix A with full column rank, i.e., r = n, satisfies the following properties:

(a) All columns of the matrix A are pivot columns.

(b) Thus the null space contains the zero vector x = 0 only.

(c) If Ax = b has a solution, then it has only one solution. Otherwise there is no solution.

    Example 3

    TO BE CONTINUED

    2.11.4 Row Rank and Column Rank Less (r < m; r < n)

In this case, Ax = b has either no solution or an infinite number of solutions. In practice, this situation rarely occurs.

    2.12 Orthogonality of the Four Subspaces

Two vectors are said to be orthogonal to each other if their dot product is zero, i.e., v·w = 0 or vᵀw = 0. If v and w are the perpendicular sides of a right-angled triangle, then

$$||v||^2 + ||w||^2 = ||v + w||^2$$

The right hand side of the above expression can also be written as

$$||v + w||^2 = (v + w)^T(v + w) = v^Tv + v^Tw + w^Tv + w^Tw$$

which reduces to

$$||v + w||^2 = v^Tv + w^Tw$$

if and only if vᵀw = wᵀv = 0, which gives the test for orthogonality.

The Fundamental Theorem of Linear Algebra consists of two parts:

(a) Part I. The row and column spaces have the same dimension r, and the two null spaces N(A) and N(Aᵀ) have the remaining dimensions n − r and m − r.

(b) Part II. The row space is perpendicular to the null space of A, i.e., N(A), while the column space is perpendicular to the null space of Aᵀ, i.e., N(Aᵀ). Thus the row space C(Aᵀ) and the null space N(A) are orthogonal subspaces of ℝⁿ, while the column space C(A) and the left null space N(Aᵀ) are orthogonal subspaces of ℝᵐ.

When b is not in the column space of A, we cannot get an exact solution to Ax = b. In that case, the null space of Aᵀ enters the computation of the least squares solution of Ax = b, since the error e = b − Ax lies in N(Aᵀ) and is used in computing this least squares solution.
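Part II can be checked numerically: every null space vector is orthogonal to every row of A (numpy and scipy assumed; A is the matrix of eqn. (2.16)):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [2, 4, 10, 1, 17]], dtype=float)

N = null_space(A)             # columns form a basis of N(A)

# Each row of A dotted with each null space basis vector gives zero
print(np.allclose(A @ N, 0))  # True
```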

    2.13 Projections

The projection of a vector, say b, onto a line is the part of b along that line. Similarly, the projection of a vector b onto a plane is the part of b in that plane.

If p is the projection vector, then it is given by p = Pb, where P is the projection matrix.

For example, considering the standard three-dimensional xyz space, if a point is described by the vector b = (x1, y1, z1), then the projection of b along the z-axis can be found with the help of the projection matrix

$$P = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2.27}$$


The projection of b onto the xy plane can be found with the help of the projection matrix

$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \tag{2.28}$$

Every subspace of ℝᵐ has its own m × m projection matrix. The subspace itself is best described by a basis, collected as the columns of an m × n matrix A; the projection then maps any vector b onto the column space of this matrix. In the case of a line, the dimension is one and the matrix A has only one column. For example, the single column

$$a_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \tag{2.29}$$

gives the z-axis, whose projection matrix is eqn. (2.27). Similarly, for the two-dimensional xy plane,

$$A_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \tag{2.30}$$

whose projection matrix is eqn. (2.28).

The z-axis (a line) and the xy plane are orthogonal complements, since their dimensions add up to three and the sum of their projection matrices, eqns. (2.27) and (2.28), is the identity matrix, i.e., P1 + P2 = I. Thus b = P1b + P2b = (P1 + P2)b = Ib.

    2.13.1 Projection Onto a Line

The projection p of a vector b onto the line through another vector a is given by the product of a scalar x̂ and a, i.e., p = x̂a is a multiple of a.

The key to projection is orthogonality: the perpendicular dropped from b to the line through a determines the error vector e, given by

e = b − x̂a (2.31)


Since this error vector is perpendicular to a, we can determine the scalar x̂ from the fact that the dot product of e and a is zero. Thus

$$a \cdot e = 0 \;\Rightarrow\; a \cdot (b - \hat{x}a) = 0 \;\Rightarrow\; a \cdot b - \hat{x}\, a \cdot a = 0 \;\Rightarrow\; \hat{x} = \frac{a \cdot b}{a \cdot a} = \frac{a^T b}{a^T a}$$

(the transpose form applies when the vectors are written as column matrices).

Thus if b = a, then x̂ = 1 and the projection of a onto a is itself. If b is perpendicular to a, then aᵀb = 0 and the projection is p = 0, i.e., x̂ = 0.

Example 1

Problem: Project the vector b = [1 1 1]ᵀ onto the line through the vector a = [1 2 2]ᵀ.

Solution: Here we need to find the projection of b on a, given by p = x̂a. From the formula given above,

$$\hat{x} = \frac{a^T b}{a^T a} = \frac{5}{9} \tag{2.32}$$

Hence the projection vector p is given by

$$p = \hat{x}a = \frac{5}{9}\,a = \begin{bmatrix} 5/9 \\ 10/9 \\ 10/9 \end{bmatrix} \tag{2.33}$$

The error vector between b and p is e = b − p. Thus

$$e = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 5/9 \\ 10/9 \\ 10/9 \end{bmatrix} = \begin{bmatrix} 4/9 \\ -1/9 \\ -1/9 \end{bmatrix} \tag{2.34}$$

This error e must be perpendicular to a. This can be verified as

$$e^T a = \begin{bmatrix} 4/9 & -1/9 & -1/9 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} = \frac{4}{9} - \frac{2}{9} - \frac{2}{9} = 0$$
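The same computation in a few lines of numpy (assumed, as before):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([1.0, 1.0, 1.0])

xhat = (a @ b) / (a @ a)   # 5/9
p = xhat * a               # projection of b onto the line through a
e = b - p                  # error vector

print(p)       # [0.5556 1.1111 1.1111] = (5/9, 10/9, 10/9)
print(e @ a)   # 0.0 -> e is perpendicular to a
```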

    2.13.2 Projection with Trigonometry

As shown in the above example, the vector b has been split into two parts:

(a) the component along the line through a, given by p;

(b) the component perpendicular to the line through a, given by e.

These two components form the sides of a right-angled triangle, having lengths ||b|| cos θ and ||b|| sin θ, where θ is the angle between b and a. Thus the components of b from trigonometry are given as

$$||p|| = ||b|| \cos\theta \qquad \text{and} \qquad ||e|| = ||b|| \sin\theta$$

These formulas involve calculating square roots. Hence the better way to arrive at p is through the projection p = x̂a, with

$$\hat{x} = \frac{a^T b}{a^T a}$$

The projection matrix can now be formulated from p = a x̂ as

$$p = a\, \frac{a^T b}{a^T a} = \frac{a a^T}{a^T a}\, b = Pb$$

giving

$$P = \frac{a a^T}{a^T a}$$

Thus the projection matrix P onto the line through a is given by P = aaᵀ/aᵀa.


Example 2

For Example 1, calculate the projection matrix P.

$$P = \frac{a a^T}{a^T a} = \frac{\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} \begin{bmatrix} 1 & 2 & 2 \end{bmatrix}}{\begin{bmatrix} 1 & 2 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}} = \frac{1}{9} \begin{bmatrix} 1 & 2 & 2 \\ 2 & 4 & 4 \\ 2 & 4 & 4 \end{bmatrix} \tag{2.35}$$

This matrix projects any vector b onto a, as can be verified for the b of Example 1:

$$p = Pb = \frac{1}{9} \begin{bmatrix} 1 & 2 & 2 \\ 2 & 4 & 4 \\ 2 & 4 & 4 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 5/9 \\ 10/9 \\ 10/9 \end{bmatrix} \tag{2.36}$$

Note: If the vector a is doubled, the matrix P stays the same; it still projects onto the same line.

If the matrix is squared, P² equals P, since projecting a second time changes nothing.

When P projects onto one subspace, I − P projects onto the perpendicular subspace; applied to b, this gives the vector e.
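These properties are easy to confirm numerically (numpy assumed):

```python
import numpy as np

a = np.array([[1.0], [2.0], [2.0]])   # a as a column vector
P = (a @ a.T) / (a.T @ a)             # P = a a^T / (a^T a)

print(np.allclose(P.T, P))            # True: P is symmetric
print(np.allclose(P @ P, P))          # True: projecting twice = once
a2 = 2 * a                            # doubling a leaves P unchanged
print(np.allclose((a2 @ a2.T) / (a2.T @ a2), P))  # True
```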


    2.13.3 Projection Onto a Subspace

The above results are for the one-dimensional problem of projection onto a line, i.e., p = x̂a is the projection of any vector b onto the line passing through a.

In the case of an n-dimensional subspace, we start with n vectors a1, a2, ..., an in ℝᵐ, where all the a's are independent. The projection onto the subspace they span is given by

$$p = \hat{x}_1 a_1 + \hat{x}_2 a_2 + \dots + \hat{x}_n a_n = A\hat{x}$$

This projection, which is closest to b, is calculated from

$$A^T(b - A\hat{x}) = 0$$

which gives

$$A^T A \hat{x} = A^T b$$

The symmetric matrix AᵀA is an n × n matrix and is invertible since the a's are independent. The solution is given by

$$\hat{x} = (A^T A)^{-1} A^T b \tag{2.37}$$

Thus the projection p can now be given as

$$p = A\hat{x} = A(A^T A)^{-1} A^T b$$

Hence, if p = Pb is the projection of b onto the column space of A, then the projection matrix P is given by the formula

$$P = A(A^T A)^{-1} A^T$$

If the matrix A is rectangular, it has no inverse matrix. However, when A has independent columns, AᵀA is invertible.

In fact, for every matrix A, AᵀA has the same null space as A.

Proof: If x is in the null space of A, then Ax = 0. Multiplying by Aᵀ gives AᵀAx = 0. Hence x is also in the null space of AᵀA.

To prove that AᵀA has the same null space as A, we also need to deduce Ax = 0 from AᵀAx = 0.


Multiplying by xᵀ gives xᵀAᵀAx = 0, i.e., (Ax)ᵀ(Ax) = 0, i.e., ||Ax||² = 0. Thus if AᵀAx = 0, then Ax has length zero, so Ax = 0. Thus every vector x in the null space of AᵀA is also in the null space of A. Hence proved. In particular, when the columns of A are linearly independent, the null space of A contains only the zero vector, so the same holds for AᵀA, which is then invertible.

Note: 1. The projection p of b onto the column space of A is given by

$$p = \hat{x}_1 a_1 + \hat{x}_2 a_2 + \dots + \hat{x}_n a_n \tag{2.38}$$

with

$$\hat{x} = (A^T A)^{-1} A^T b \tag{2.39}$$

Thus the projection is p = Ax̂.

2. The error is given by

$$e = b - p = b - A\hat{x} \tag{2.40}$$

3. The projection matrix P has two properties, namely Pᵀ = P and P² = P.
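A small sketch of projection onto a two-dimensional subspace of ℝ³ via the normal equations (numpy assumed; the columns of A are an illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])      # independent columns spanning a plane
b = np.array([6.0, 0.0, 0.0])

xhat = np.linalg.solve(A.T @ A, A.T @ b)  # solve A^T A xhat = A^T b
p = A @ xhat                              # projection of b onto C(A)
e = b - p

print(np.allclose(A.T @ e, 0))  # True: e is perpendicular to every column
```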

    2.14 Least Squares Solution to Ax = b

If x is an exact solution of Ax = b, then the error defined by e = b − Ax is equal to zero. This is the case when the matrix A is invertible, or more generally when b is in the column space of A.

However, quite often Ax = b has no solution. This happens when A has full column rank (r = n) and row rank less (r < m), i.e., the matrix has more rows than columns. Simply put, there are more equations than unknowns (m > n). In this case, an exact solution is possible if and only if b is in the column space of A.


In most practical cases of a control problem, if all the measurements are perfect, then b will be in the column space of A. However, if the measurements include noise, external disturbances or uncertainties, then b lies outside the column space of A. In this case, the error e will not be zero. Thus we have the special case of the least squares solution, where x̂ is chosen such that the length of e is as small as possible.

Thus if we consider e = b − Ax, then e is zero when x is an exact solution of Ax = b, or in other words when b is in the column space of A. If b is not in the column space of A, then e is not zero. The problem then becomes that of minimising this error e as far as possible. When the length of e is as small as possible, x̂ is a least squares solution.

When Ax = b has no solution, we consider the projection p of b onto the column space of A, given by p = Ax̂. This projection, which is closest to b, is calculated from

$$A^T(b - A\hat{x}) = 0 \tag{2.41}$$

or

$$A^T A \hat{x} = A^T b \tag{2.42}$$

Thus the solution is given by

$$\hat{x} = (A^T A)^{-1} A^T b \tag{2.43}$$

It can be proved that x̂ is the least squares solution, or best solution, for Ax = b, minimising the error e = b − Ax̂; the proof can be given through geometry, algebra or calculus (by setting the derivative of the squared error to zero).

Example 3

Find the closest line to the points (0, 6), (1, 0) and (2, 0).

Solution: Let y = Cx + D be the equation of the straight line. For the given problem, let the points be (x1, y1), (x2, y2) and (x3, y3); thus x1 = 0, x2 = 1, x3 = 2. Substituting x1, x2, x3 into y = Cx + D, we get

y1 = D; y2 = C + D; y3 = 2C + D


Since y1 = 6 and y2 = y3 = 0, we get the set of equations

$$D = 6, \qquad C + D = 0, \qquad 2C + D = 0 \tag{2.44}$$

The above system of equations does not have a solution.

Expressing the above system of equations in matrix form, we get

$$\begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} C \\ D \end{bmatrix} = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} \tag{2.45}$$

Considering

$$A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix}, \qquad x = \begin{bmatrix} C \\ D \end{bmatrix}, \qquad b = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix}$$

then Ax = b is not solvable, since b is not in the column space of A.

To find the least squares solution, we solve

$$A^T A \hat{x} = A^T b$$

i.e.,

$$\begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \hat{x} = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} \tag{2.46}$$

$$\begin{bmatrix} 5 & 3 \\ 3 & 3 \end{bmatrix} \hat{x} = \begin{bmatrix} 0 \\ 6 \end{bmatrix}$$


Thus

$$\hat{x} = \begin{bmatrix} 5 & 3 \\ 3 & 3 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 6 \end{bmatrix} = \begin{bmatrix} -3 \\ 5 \end{bmatrix} \tag{2.47}$$

Thus, with C = −3 and D = 5,

$$y = -3x + 5$$

is the equation of the line closest to the three given points.

The projection p of b onto the column space of A is thus given by

$$p = A\hat{x} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} -3 \\ 5 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ -1 \end{bmatrix} \tag{2.48}$$

Thus the error e = b − p is given as

$$e = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 5 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} \tag{2.49}$$

The squared length of e for this solution is

$$||e||^2 = e^T e = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} = 6$$


To check and verify:

(1) e must be perpendicular to both columns of A. Hence

$$e^T a_1 = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} = 0, \qquad e^T a_2 = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = 0 \tag{2.50}$$

which verifies the same.

(2) p = Pb

To verify this, first we find P:

$$P = A(A^T A)^{-1} A^T = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \left( \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \right)^{-1} \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \tag{2.51}$$

$$= \frac{1}{6} \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 3 & -3 \\ -3 & 5 \end{bmatrix} \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} -3 & 0 & 3 \\ 5 & 2 & -1 \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 5 & 2 & -1 \\ 2 & 2 & 2 \\ -1 & 2 & 5 \end{bmatrix}$$


Hence, calculating

$$p = Pb = \frac{1}{6} \begin{bmatrix} 5 & 2 & -1 \\ 2 & 2 & 2 \\ -1 & 2 & 5 \end{bmatrix} \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ -1 \end{bmatrix} \tag{2.52}$$

which was the earlier result.
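The same fit can be obtained directly with numpy's built-in least squares routine (a cross-check, not part of the notes):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
b = np.array([6.0, 0.0, 0.0])

xhat, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(xhat)      # [-3.  5.] -> closest line y = -3x + 5
print(residual)  # [6.] -> squared error ||e||^2
```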


    Chapter 3

    Matrices and Determinants

    3.1 Introduction

    In this chapter, properties of determinants are discussed.

    3.2 Determinant of a Square Matrix

The determinant of a square matrix is a scalar which can immediately tell us whether the given matrix is invertible or not. If the determinant is zero, i.e., the matrix is singular, then it is not invertible. A square matrix is invertible if and only if its determinant is not equal to zero.


    Chapter 4

Solution to Dynamic Problems with Eigenvalues and Eigenvectors

    4.1 Introduction

In this chapter, eigenvalues and eigenvectors are discussed.

    4.2 Linear Differential Equations and Matrices

Steady state problems can be expressed by linear equations of the form Ax = b. Dynamic problems are those of the form du/dt = Au. Their solutions change with time and can decay, grow or oscillate. Eigenvalues and eigenvectors help us arrive at the solution of these differential equations, expressed in matrix form, in simple and easy steps.
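As a preview of the method (numpy assumed; the 2 × 2 system here is an illustrative choice, not from the notes), the solution is built from eigenvalues λi and eigenvectors vi as u(t) = Σ ci e^(λi t) vi:

```python
import numpy as np

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2: a decaying solution
u0 = np.array([1.0, 0.0])

lam, V = np.linalg.eig(A)      # A V = V diag(lam)
c = np.linalg.solve(V, u0)     # expand u0 in the eigenvector basis

def u(t):
    # u(t) = sum_i c_i * exp(lam_i * t) * v_i
    return (V * np.exp(lam * t)) @ c

print(u(0.0))  # [1. 0.] -- matches the initial condition
print(u(1.0))  # decays toward zero as t grows
```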


    Chapter 5

Matrices and Linear Transformations

    5.1 Introduction

In this chapter, linear transformations and their applications using matrices are discussed.

    5.2 Linear Transformations

A transformation is like a function. In the case of a function, for every input x the output is expressed as f(x). Similarly, if v is an input vector in the vector space V, then a transformation T assigns an output T(v) to each input vector v. A linear transformation is a transformation which satisfies the two conditions of homogeneity and superposition: T(cv) = cT(v) and T(v + w) = T(v) + T(w) for all vectors v, w and scalars c.
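Every matrix defines such a transformation via T(v) = Av, as this small numpy check illustrates (the matrix and vectors are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])   # T maps R^2 into R^3

def T(v):
    return A @ v

v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])
c, d = 2.0, -1.0

# Homogeneity and superposition combined: T(c v + d w) = c T(v) + d T(w)
print(np.allclose(T(c*v + d*w), c*T(v) + d*T(w)))  # True
```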